Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we study appropriate numerical simulation methods for vibration soil cutting. Three numerical simulation methods commonly used for uniform-speed soil cutting (Lagrange, ALE, and DEM) are analyzed, and three vibration soil cutting simulation models are established in LS-DYNA. The applicability of the three methods to this problem is analyzed by combining the model mechanisms with the simulation results. Both the Lagrange method and the DEM method can reproduce the oscillation of the tool force and the large deformation of the soil during vibration cutting, with the Lagrange method better capturing the breaking of soil debris. Because of its poor stability, the ALE method is not suitable for the soil vibration cutting problem.
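Illustrative aside (not from the paper above, whose models were built in LS-DYNA): the sketch below shows the linear spring-dashpot normal contact law that sits at the core of most DEM soil models; the stiffness and damping values are hypothetical placeholders.

```python
import numpy as np

def dem_contact_force(x_i, x_j, v_i, v_j, radius, k_n=1e5, c_n=50.0):
    """Linear spring-dashpot normal contact force between two spherical
    soil particles (a minimal stand-in for a DEM contact law)."""
    d = x_j - x_i
    dist = np.linalg.norm(d)
    overlap = 2.0 * radius - dist          # > 0 when the particles touch
    if overlap <= 0.0:
        return np.zeros(3)                 # no contact, no force
    n = d / dist                           # unit normal, i -> j
    v_rel_n = np.dot(v_j - v_i, n)         # normal relative velocity
    f_mag = k_n * overlap - c_n * v_rel_n  # elastic push + viscous damping
    return -f_mag * n                      # force acting on particle i
```

Summing such pair forces over all particle contacts, including tool-soil contacts, at each time step is what produces the oscillating tool force the abstract describes.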
Simulation of tunneling construction methods of the Cisumdawu toll road
NASA Astrophysics Data System (ADS)
Abduh, Muhamad; Sukardi, Sapto Nugroho; Ola, Muhammad Rusdian La; Ariesty, Anita; Wirahadikusumah, Reini D.
2017-11-01
Simulation can be used as a tool for planning and analyzing a construction method. Using simulation techniques, a contractor can optimally design the resources associated with a construction method and compare it with alternatives based on several criteria, such as productivity, waste, and cost. This paper discusses the use of simulation of the Norwegian Method of Tunneling (NMT) for a 472-meter tunneling work in the Cisumdawu Toll Road project. Primary and secondary data were collected to provide information for the simulation as well as problems that may be faced by the contractor. The method was modelled using CYCLONE and then simulated using WebCYCLONE. The simulation gives the duration of the project from duration models of each work task, which were based on a literature review, machine productivity, and several assumptions. It also gives the total cost of the project, modeled from construction and building unit-cost journals and from online websites of local and international suppliers. The advantages and disadvantages of the method were analyzed based on its productivity, waste, and cost. The simulation estimated a total cost of about Rp 900,437,004,599 and a total duration of 653 days for the tunneling operation. The results will be used for a recommendation to the contractor before implementation of the selected tunneling operation.
NASA Astrophysics Data System (ADS)
Oh, Seok-Geun; Suh, Myoung-Seok
2017-07-01
The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member of each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
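Illustrative aside: the abstract does not give the exact WEA_RAC or WEA_Tay weighting formulas, so the Python sketch below uses a generic skill-based weighting (inverse RMSE over the training window) only to show the shape of a weighted ensemble average; the weighting rule is an assumption, not the paper's method.

```python
import numpy as np

def weighted_ensemble(members, obs_train):
    """Skill-weighted ensemble average. Weights are estimated on a
    training window as inverse RMSE versus observations (a stand-in for
    the paper's WEA_RAC/WEA_Tay weights, which also fold in correlation
    or the Taylor score). members: (n_members, n_time) array;
    obs_train: observed series over the first len(obs_train) steps."""
    members = np.asarray(members, float)
    t = len(obs_train)
    rmse = np.sqrt(np.mean((members[:, :t] - obs_train) ** 2, axis=1))
    w = 1.0 / (rmse + 1e-12)   # more skillful members get larger weights
    w /= w.sum()
    return w @ members         # weighted projection over the full period
```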
Grand canonical ensemble Monte Carlo simulation of the dCpG/proflavine crystal hydrate.
Resat, H; Mezei, M
1996-09-01
The grand canonical ensemble Monte Carlo molecular simulation method is used to investigate hydration patterns in the crystal hydrate structure of the dCpG/proflavine intercalated complex. The objective of this study is to show by example that the recently advocated grand canonical ensemble simulation is a computationally efficient method for determining the positions of the hydrating water molecules in protein and nucleic acid structures. A detailed molecular simulation convergence analysis and an analogous comparison of the theoretical results with experiments clearly show that the grand ensemble simulations can be far more advantageous than the comparable canonical ensemble simulations.
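Illustrative aside: generic grand canonical Monte Carlo rests on the textbook insertion/deletion acceptance rules (Frenkel-Smit form), sketched below in Python; this is the generic GCMC machinery, not the paper's specific implementation.

```python
import numpy as np

def gcmc_insertion_acc(dU, mu, N, V, beta, lam3):
    """Acceptance probability for a trial particle insertion in the grand
    canonical ensemble; lam3 is the cubed thermal de Broglie wavelength
    and dU is the energy change caused by the insertion."""
    return min(1.0, V / (lam3 * (N + 1)) * np.exp(beta * (mu - dU)))

def gcmc_deletion_acc(dU, mu, N, V, beta, lam3):
    """Acceptance probability for a trial particle deletion; dU is the
    energy change caused by removing the particle."""
    return min(1.0, lam3 * N / V * np.exp(-beta * (dU + mu)))
```

Because particle numbers fluctuate under these rules, water molecules appear only where insertion is thermodynamically favorable, which is why the method locates hydration sites efficiently.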
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Simulating the IPOD, East Asian summer monsoon, and their relationships in CMIP5
NASA Astrophysics Data System (ADS)
Yu, Miao; Li, Jianping; Zheng, Fei; Wang, Xiaofan; Zheng, Jiayu
2018-03-01
This paper evaluates the simulation performance of 37 coupled models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) with respect to the East Asian summer monsoon (EASM), the Indo-Pacific warm pool and North Pacific Ocean dipole (IPOD), and the interrelationship between them. The results show that the majority of the models are unable to accurately simulate the interannual variability and long-term trends of the EASM, and their simulations of the temporal and spatial variations of the IPOD are also limited. Further analysis shows that the correlation coefficients between the simulated and observed EASM index (EASMI) are proportional to those between the simulated and observed IPOD index (IPODI); that is, if a model has the skill to simulate one of them, it will likely generate a good simulation of the other. Based on this relationship, the paper proposes a conditional multi-model ensemble method (CMME) that excludes models lacking the capability to simulate the IPOD and EASM when calculating the multi-model ensemble (MME). The analysis shows that, compared with the MME, the CMME method can significantly improve the simulations of the spatial and temporal variations of both the IPOD and EASM as well as their interrelationship, suggesting the potential for the CMME approach to be used in place of the MME method.
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Comparison of two fractal interpolation methods
NASA Astrophysics Data System (ADS)
Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo
2017-03-01
As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the midpoint displacement simulations show a relatively flat surface whose peaks take on different heights as the fractal dimension increases, while the Weierstrass-Mandelbrot simulations show a rough surface with dense, highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. For fractal dimensions of 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, whereas for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values fluctuating around zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicates that the midpoint displacement simulation is aperiodic with prominent randomness, making it suitable for simulating aperiodic surfaces, while the Weierstrass-Mandelbrot simulation has strong periodicity, making it suitable for simulating periodic surfaces.
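Illustrative aside: a minimal Python sketch of the 1D midpoint displacement algorithm discussed above; zero endpoints and the Hurst-exponent scaling rule used here are one common convention among several.

```python
import numpy as np

def midpoint_displacement(n_levels, H=0.5, rng=None):
    """1D fractal profile by midpoint displacement; H is the Hurst
    exponent (for a profile, fractal dimension D = 2 - H)."""
    rng = rng or np.random.default_rng()
    n = 2 ** n_levels + 1
    y = np.zeros(n)                       # endpoints fixed at zero
    step, scale = n - 1, 1.0
    while step > 1:
        half = step // 2
        for i in range(half, n - 1, step):
            # midpoint = mean of neighbours + random displacement
            y[i] = 0.5 * (y[i - half] + y[i + half]) + scale * rng.normal()
        scale *= 0.5 ** H                 # shrink displacement each level
        step = half
    return y
```

Because the displacement amplitude halves (raised to H) at every refinement level, a smaller H yields the rougher, higher-variance profiles the comparison above describes.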
Wang, Chih-Hao; Fang, Te-Hua; Cheng, Po-Chien; Chiang, Chia-Chin; Chao, Kuan-Chi
2015-06-01
This paper used numerical and experimental methods to investigate the mechanical properties of amorphous NiAl alloys during the nanoindentation process. The simulation was performed using the many-body tight-binding potential method, and temperature, plastic deformation, elastic recovery, and hardness were evaluated. The experimental work was based on nanoindentation measurements, allowing precise determination of Young's modulus and hardness values for comparison with the simulation results. The indentation simulations showed a significant increase of NiAl hardness and elastic recovery with increasing Ni content, and likewise that hardness and Young's modulus increase with increasing Ni content. The simulation results are in good agreement with the experimental results. An adhesion test of the amorphous NiAl alloys at room temperature is also described.
Hierarchical Simulation to Assess Hardware and Software Dependability
NASA Technical Reports Server (NTRS)
Ries, Gregory Lawrence
1997-01-01
This thesis presents a method for conducting hierarchical simulations to assess system hardware and software dependability. The method is intended to model embedded microprocessor systems. A key contribution of the thesis is the idea of using fault dictionaries to propagate fault effects upward from the level of abstraction where a fault model is assumed to the system level where the ultimate impact of the fault is observed. A second important contribution is the analysis of the software behavior under faults as well as the hardware behavior. The simulation method is demonstrated and validated in four case studies analyzing Myrinet, a commercial, high-speed networking system. One key result from the case studies shows that the simulation method predicts the same fault impact 87.5% of the time as is obtained by similar fault injections into a real Myrinet system. Reasons for the remaining discrepancy are examined in the thesis. A second key result shows the reduction in the number of simulations needed due to the fault dictionary method. In one case study, 500 faults were injected at the chip level, but only 255 propagated to the system level. Of these 255 faults, 110 shared identical fault dictionary entries at the system level and so did not need to be resimulated. The necessary number of system-level simulations was therefore reduced from 500 to 145. Finally, the case studies show how the simulation method can be used to improve the dependability of the target system. The simulation analysis was used to add recovery to the target software for the most common fault propagation mechanisms that would cause the software to hang. After the modification, the number of hangs was reduced by 60% for fault injections into the real system.
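Illustrative aside: the fault-dictionary reduction described above (500 chip-level faults, 255 propagated, only 145 unique system-level simulations needed) boils down to grouping faults by their dictionary signature. A minimal Python sketch, with a hypothetical signature encoding:

```python
def reduce_by_dictionary(fault_signatures):
    """Group faults that produce identical system-level dictionary
    entries; only one representative per entry needs re-simulation.
    `fault_signatures` maps fault_id -> hashable signature (a
    hypothetical encoding of the propagated fault effect)."""
    dictionary = {}
    for fault_id, signature in fault_signatures.items():
        dictionary.setdefault(signature, []).append(fault_id)
    # one representative fault per unique system-level entry
    representatives = [ids[0] for ids in dictionary.values()]
    return dictionary, representatives
```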
Spectral simulations of an axisymmetric force-free pulsar magnetosphere
NASA Astrophysics Data System (ADS)
Cao, Gang; Zhang, Li; Sun, Sineng
2016-02-01
A pseudo-spectral method with an absorbing outer boundary is used to solve a set of time-dependent force-free equations. In this method, both electric and magnetic fields are expanded in terms of the vector spherical harmonic (VSH) functions in spherical geometry and the divergence-free state of the magnetic field is enforced analytically by a projection method. Our simulations show that the Deutsch vacuum solution and the Michel monopole solution can be reproduced well by our pseudo-spectral code. Further, the method is used to present a time-dependent simulation of the force-free pulsar magnetosphere for an aligned rotator. The simulations show that the current sheet in the equatorial plane can be resolved well and the spin-down luminosity obtained in the steady state is in good agreement with the value given by Spitkovsky.
Effective description of a 3D object for photon transportation in Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Suganuma, R.; Ogawa, K.
2000-06-01
Photon transport simulation by means of the Monte Carlo method is an indispensable technique for examining scatter and absorption correction methods in SPECT and PET. The authors have developed a method for object description with maximum-size regions (maximum rectangular regions: MRRs) to speed up photon transport simulation, and compared the computation time with that of conventional object description methods, a voxel-based (VB) method and an octree method, in simulations of two kinds of phantoms. The simulation results showed that the computation time with the proposed method became about 50% of that with the VB method and about 70% of that with the octree method for a high-resolution MCAT phantom. Here, details of the expansion of the MRR method to three dimensions are given. Moreover, the effectiveness of the proposed method was compared with that of the VB and octree methods.
A scalable parallel black oil simulator on distributed memory parallel computers
NASA Astrophysics Data System (ADS)
Wang, Kun; Liu, Hui; Chen, Zhangxin
2015-11-01
This paper presents our work on developing a parallel black oil simulator for distributed memory computers based on our in-house parallel platform. The parallel simulator is designed to overcome the performance issues of common simulators that are implemented for personal computers and workstations. The finite difference method is applied to discretize the black oil model. In addition, some advanced techniques are employed to strengthen the robustness and parallel scalability of the simulator, including an inexact Newton method, matrix decoupling methods, and algebraic multigrid methods. A new multi-stage preconditioner is proposed to accelerate the solution of linear systems from the Newton methods. Numerical experiments show that our simulator is scalable and efficient, and is capable of simulating extremely large-scale black oil problems with tens of millions of grid blocks using thousands of MPI processes on parallel computers.
Collaborative simulation method with spatiotemporal synchronization process control
NASA Astrophysics Data System (ADS)
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronic system, such as a high speed train, it is difficult to effectively simulate the entire system's dynamic behavior because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupled simulations of the subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupled simulation of a given complex mechatronic system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism that defines the interfacing and interaction among subsystems, and 2) a simulation process control algorithm that realizes the coupled simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can be used to simulate subsystem interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupled simulation among multi-disciplinary subsystems. The method has been successfully applied in China's high speed train design and development processes, demonstrating that it can be applied to a wide range of engineering system design and simulation tasks with improved efficiency and effectiveness.
2015-01-01
Procedure. The simulated annealing (SA) algorithm is a well-known local search metaheuristic used to address discrete, continuous, and multiobjective... design of experiments (DOE) to tune the parameters of the optimization algorithm. Section 5 shows the results of the case study. Finally, concluding... metaheuristic. The proposed method is broken down into two phases. Phase I consists of a Monte Carlo simulation to obtain the simulated percentage of failure
NASA Astrophysics Data System (ADS)
Zhao, Leihong; Qu, Xiaolu; Lin, Hongjun; Yu, Genying; Liao, Bao-Qiang
2018-03-01
Simulation of randomly rough bioparticle surfaces is crucial for better understanding and controlling interface behaviors and membrane fouling. A review of the literature indicated a lack of an effective method for simulating random rough bioparticle surfaces. In this study, a new method combining a Gaussian distribution, the Fourier transform, a spectrum method, and coordinate transformation was proposed to simulate the surface topography of foulant bioparticles in a membrane bioreactor (MBR). The natural surface of a foulant bioparticle was found to be irregular and randomly rough. The topography simulated by the new method was quite similar to that of real foulant bioparticles. Moreover, the simulated topography of foulant bioparticles was critically affected by the correlation length (l) and root mean square roughness (σ) parameters. The new method proposed in this study shows notable superiority over conventional methods for the simulation of randomly rough foulant bioparticles. Its ease of use and fitness for purpose point towards potential applications in research on interface behaviors and membrane fouling.
Faster protein folding using enhanced conformational sampling of molecular dynamics simulation.
Kamberaj, Hiqmet
2018-05-01
In this study, we applied a swarm particle-like molecular dynamics (SPMD) approach to enhance the conformational sampling of replica exchange simulations. In particular, the approach showed significant improvement in the sampling efficiency of conformational phase space when combined with the replica exchange method (REM) in computer simulations of peptide/protein folding. First, we introduce the augmented dynamical system of equations and demonstrate the stability of the algorithm. Then, we illustrate the approach using different fully atomistic and coarse-grained model systems, comparing them with the standard replica exchange method. In addition, we applied SPMD simulation to calculate the time correlation functions of transitions on a two-dimensional surface to demonstrate the enhancement of transition path sampling. Our results showed that the folded structure can be obtained in a shorter simulation time with the new method than with the non-augmented dynamical system: typically in less than 0.5 ns of replica exchange runs when the native folded structure is known, and within a 40 ns simulation time scale for blind structure prediction. Furthermore, the root mean square deviations from the reference structures were less than 2 Å. To demonstrate the performance of the new method, we also implemented three simulation protocols using the CHARMM software. Comparisons are also performed with the standard targeted molecular dynamics simulation method.
Comparison of MM/GBSA calculations based on explicit and implicit solvent simulations.
Godschalk, Frithjof; Genheden, Samuel; Söderhjelm, Pär; Ryde, Ulf
2013-05-28
Molecular mechanics with generalised Born and surface area solvation (MM/GBSA) is a popular method to calculate the free energy of the binding of ligands to proteins. It involves molecular dynamics (MD) simulations of the protein-ligand complex in explicit solvent to give a set of snapshots for which energies are calculated with an implicit solvent. This change in the solvation method (explicit → implicit) would strictly require that the energies are reweighted with the implicit-solvent energies, which is normally not done. In this paper we calculate MM/GBSA energies with two generalised Born models for snapshots generated by the same methods or by explicit-solvent simulations for five synthetic N-acetyllactosamine derivatives binding to galectin-3. We show that the resulting energies are very different in both absolute and relative terms, demonstrating that the change in the solvent model is far from innocent and that standard MM/GBSA is not a consistent method. The ensembles generated with the various solvent models are quite different, with root-mean-square deviations of 1.2-1.4 Å. The ensembles can be converted to each other by performing short MD simulations with the new method, but the convergence is slow, showing mean absolute differences in the calculated energies of 6-7 kJ mol(-1) after 2 ps simulations. Minimisations show even slower convergence, and there are strong indications that the energies obtained from minimised structures differ from those obtained by MD.
Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces
NASA Astrophysics Data System (ADS)
Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz
2018-03-01
In this work, we implement for the first time radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of the method is evaluated through the calculation of the surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts both the coexisting and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface, and we show that the slab-based tail method does not alter the location of the Gibbs equimolar dividing surface. Additionally, the surface tension exhibits non-monotonic behavior as a function of the radius of the equimolar dividing surface.
Two-way coupling of magnetohydrodynamic simulations with embedded particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Makwana, K. D.; Keppens, R.; Lapenta, G.
2017-12-01
We describe a method for coupling an embedded domain in a magnetohydrodynamic (MHD) simulation with a particle-in-cell (PIC) method. In this two-way coupling we follow the work of Daldorff et al. (2014) [19] in which the PIC domain receives its initial and boundary conditions from MHD variables (MHD to PIC coupling) while the MHD simulation is updated based on the PIC variables (PIC to MHD coupling). This method can be useful for simulating large plasma systems, where kinetic effects captured by particle-in-cell simulations are localized but affect global dynamics. We describe the numerical implementation of this coupling, its time-stepping algorithm, and its parallelization strategy, emphasizing the novel aspects of it. We test the stability and energy/momentum conservation of this method by simulating a steady-state plasma. We test the dynamics of this coupling by propagating plasma waves through the embedded PIC domain. Coupling with MHD shows satisfactory results for the fast magnetosonic wave, but significant distortion for the circularly polarized Alfvén wave. Coupling with Hall-MHD shows excellent coupling for the whistler wave. We also apply this methodology to simulate a Geospace Environmental Modeling (GEM) challenge type of reconnection with the diffusion region simulated by PIC coupled to larger scales with MHD and Hall-MHD. In both these cases we see the expected signatures of kinetic reconnection in the PIC domain, implying that this method can be used for reconnection studies.
NASA Astrophysics Data System (ADS)
Li, De-Chang; Ji, Bao-Hua
2012-06-01
The Jarzynski identity (JI) method has been suggested as a promising tool for reconstructing the free energy landscape of biomolecular interactions in numerical simulations and experiments. However, the JI method has not yet been well tested in complex systems such as ligand-receptor molecular pairs. In this paper, we applied a large number of steered molecular dynamics (SMD) simulations to dissociate the protease of human immunodeficiency virus type 1 (HIV-1 protease) from its inhibitors. We showed that, because of the intrinsic complexity of the ligand-receptor system, the energy barrier predicted by the JI method at high pulling rates is much higher than experimental results. However, with a slower pulling rate and fewer switching realizations, the JI predictions approach the experimental values. These results suggest that the JI method is more appropriate for reconstructing the free energy landscape from data taken in experiments, since the pulling rates used in experiments are often much slower than those in SMD simulations. Furthermore, we showed that a higher loading stiffness produces a higher precision in the calculated energy landscape because it yields a lower mean value and narrower bandwidth of the work distribution in SMD simulations.
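Illustrative aside: the Jarzynski identity itself, dF = -kT ln<exp(-W/kT)>, estimated from a set of pulling-work samples. A minimal Python sketch of the generic estimator, not the paper's SMD protocol:

```python
import numpy as np
from scipy.special import logsumexp

def jarzynski_free_energy(work, kT):
    """Free-energy difference from nonequilibrium pulling-work samples
    via the Jarzynski identity, dF = -kT * ln <exp(-W/kT)>; logsumexp
    keeps the exponential average numerically stable."""
    w = np.asarray(work, float) / kT
    return -kT * (logsumexp(-w) - np.log(len(w)))
```

Because the exponential average is dominated by rare low-work trajectories, the estimate is strongly biased at fast pulling rates, which is consistent with the overestimated barriers reported above.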
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
Simulation Analysis of Helicopter Ground Resonance Nonlinear Dynamics
NASA Astrophysics Data System (ADS)
Zhu, Yan; Lu, Yu-hui; Ling, Ai-min
2017-07-01
In order to accurately predict the dynamic instability of helicopter ground resonance, a modeling and simulation method for helicopter ground resonance is presented that accounts for the nonlinear dynamic characteristics of components (rotor lead-lag damper, landing gear wheel, and absorber). A numerical integration method is used to calculate the transient responses of the body and rotor following a simulated disturbance. To quantify the instabilities, a Fast Fourier Transform (FFT) is conducted to estimate the modal frequencies, and a moving rectangular window method is employed to estimate the modal damping from the response time history. Simulation results show that the ground resonance simulation test accurately captures the blade lead-lag regressive mode frequency, and the modal damping obtained from the attenuation curves is close to the test results. The simulation test results agree with the actual accident situation and support the correctness of the simulation method. This analysis method for ground resonance simulation tests gives results consistent with real helicopter engineering tests.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment, their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling but also two direct redrawing methods were investigated, based on a Poisson and a Gaussian distribution, respectively. Mean, standard deviation, skewness, and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and confirmed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100; only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or lower-count) images from full-count images: it correctly simulates the statistical properties, also in the case of rounding off of the images.
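Illustrative aside: the three half-count schemes compared above can be sketched in a few lines of Python. Poisson resampling is modeled here as binomial thinning of the measured counts (an assumption; thinning a Poisson variable yields an exactly Poisson result, matching the statistical behavior reported), while the two redrawing schemes draw directly from distributions with half the measured mean.

```python
import numpy as np

rng = np.random.default_rng()

def half_count_resample(img):
    """Poisson resampling as binomial thinning: each measured count is
    kept with probability 0.5, so Poisson statistics are preserved."""
    return rng.binomial(img.astype(np.int64), 0.5)

def half_count_poisson_redraw(img):
    """Direct redrawing from a Poisson with half the measured mean."""
    return rng.poisson(img / 2.0)

def half_count_gaussian_redraw(img):
    """Gaussian redrawing (an approximation that degrades at low
    counts, consistent with the comparison above)."""
    return rng.normal(img / 2.0, np.sqrt(img / 2.0))
```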
Kirchhoff and Ohm in action: solving electric currents in continuous extended media
NASA Astrophysics Data System (ADS)
Dolinko, A. E.
2018-03-01
In this paper we present a simple and versatile computational simulation method for determining electric currents and electric potentials in 2D and 3D media with an arbitrary distribution of resistivity. One of the highlights of the proposed method is that the simulation space containing the distribution of resistivity, together with the points of externally applied voltage, is introduced by means of digital images or bitmaps, which makes it easy to simulate any phenomenon involving distributions of resistivity. The simulation is based on Kirchhoff's laws of electric currents and is solved by means of an iterative procedure. The method is also generalised to account for media with distributions of reactive impedance. At the end of this work, we show an example application of the simulation, consisting of reproducing the response obtained with the geophysical method of electric resistivity tomography in the presence of soil cracks. This paper is aimed at undergraduate or graduate students interested in computational physics and electricity, and also at researchers involved in the area of continuous electric media, who may find in it a simple and powerful tool for investigation.
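Illustrative aside: a minimal Python sketch of the kind of iterative Kirchhoff solver described, operating on a 2D resistivity bitmap. It uses Jacobi relaxation with harmonic-mean link conductances and, for brevity, periodic edges; the paper's exact update scheme may differ.

```python
import numpy as np

def solve_potential(rho, fixed_mask, fixed_v, n_iter=5000):
    """Iteratively enforce Kirchhoff's current law on a 2D resistivity
    map `rho` (one value per pixel, e.g. loaded from a bitmap).
    `fixed_mask` marks electrode pixels held at the potentials in
    `fixed_v`. Each sweep sets a node potential to the conductance-
    weighted mean of its four neighbours (Jacobi relaxation)."""
    g = 1.0 / rho                                   # pixel conductivity
    v = np.where(fixed_mask, fixed_v, 0.0)
    for _ in range(n_iter):
        num = np.zeros_like(v)
        den = np.zeros_like(v)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            g_nb = np.roll(g, shift, axis=axis)
            g_link = 2.0 * g * g_nb / (g + g_nb)    # harmonic-mean link
            num += g_link * np.roll(v, shift, axis=axis)
            den += g_link
        # update free nodes, re-impose electrode potentials
        v = np.where(fixed_mask, fixed_v, num / den)
    return v
```

Zero net current at every free node is exactly the fixed point of this update, so convergence of the relaxation gives the potential field, from which currents follow by Ohm's law across each link.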
SimBA: simulation algorithm to fit extant-population distributions.
Parida, Laxmi; Haiminen, Niina
2015-03-14
Simulation of populations with specified characteristics such as allele frequencies, linkage disequilibrium, etc., is an integral component of many studies, including in-silico breeding optimization. Since the accuracy and sensitivity of population simulation are critical to the quality of the output of the applications that use them, accurate algorithms are required to provide a strong foundation to the methods in these studies. In this paper we present SimBA (Simulation using Best-fit Algorithm), a non-generative approach based on a combination of stochastic techniques and discrete methods. We optimize a hill-climbing algorithm and extend the framework to include multiple subpopulation structures. Additionally, we show that SimBA is very sensitive to the input specifications, i.e., very similar but distinct input characteristics result in distinct outputs with high fidelity to the specified distributions. This property of the simulation is not explicitly modeled or studied by previous methods. We show that SimBA outperforms the existing population simulation methods, both in terms of accuracy and time-efficiency. Not only does it construct populations that meet the input specifications more stringently than other published methods, SimBA is also easy to use. It does not require explicit parameter adaptations or calibrations, and it can work with input specified as distributions, without an exemplar matrix or population as required by some methods. SimBA is available at http://researcher.ibm.com/project/5669 .
Mapping a battlefield simulation onto message-passing parallel architectures
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
Perhaps the most critical problem in distributed simulation is that of mapping: without an effective mapping of workload to processors the speedup potential of parallel processing cannot be realized. Mapping a simulation onto a message-passing architecture is especially difficult when the computational workload dynamically changes as a function of time and space; this is exactly the situation faced by battlefield simulations. This paper studies an approach where the simulated battlefield domain is first partitioned into many regions of equal size; typically there are more regions than processors. The regions are then assigned to processors; a processor is responsible for performing all simulation activity associated with the regions. The assignment algorithm is quite simple and attempts to balance load by exploiting locality of workload intensity. The performance of this technique is studied on a simple battlefield simulation implemented on the Flex/32 multiprocessor. Measurements show that the proposed method achieves reasonable processor efficiencies. Furthermore, the method shows promise for use in dynamic remapping of the simulation.
An integrated algorithm for hypersonic fluid-thermal-structural numerical simulation
NASA Astrophysics Data System (ADS)
Li, Jia-Wei; Wang, Jiang-Feng
2018-05-01
In this paper, an integrated fluid-thermal-structural method based on the finite volume method is presented. A unified system of integral equations is developed as the governing equations for the physical processes of aero-heating and structural heat transfer. The whole physical field is discretized using an upwind finite volume method. To demonstrate its capability, a numerical simulation of Mach 6.47 flow over a stainless steel cylinder shows good agreement with measured values, and the method dynamically simulates the coupled physical processes. The integrated algorithm thus proves to be efficient and reliable.
Applications of the Lattice Boltzmann Method to Complex and Turbulent Flows
NASA Technical Reports Server (NTRS)
Luo, Li-Shi; Qi, Dewei; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We briefly review the method of the lattice Boltzmann equation (LBE). We show three-dimensional LBE simulation results for a non-spherical particle in Couette flow and for 16 particles sedimenting in a fluid. We compare the LBE simulation of three-dimensional homogeneous isotropic turbulence in a periodic cubic box of size 128³ with a pseudo-spectral simulation, and find that the two results agree well with each other, although the LBE method is more dissipative than the pseudo-spectral method at small scales, as expected.
A new battery-charging method suggested by molecular dynamics simulations.
Abou Hamad, Ibrahim; Novotny, M A; Wipf, D O; Rikvold, P A
2010-03-20
Based on large-scale molecular dynamics simulations, we propose a new charging method that should be capable of charging a lithium-ion battery in a fraction of the time needed when using traditional methods. This charging method uses an additional applied oscillatory electric field. Our simulation results show that this charging method offers a great reduction in the average intercalation time for Li(+) ions, which dominates the charging time. The oscillating field not only increases the diffusion rate of Li(+) ions in the electrolyte but, more importantly, also enhances intercalation by lowering the corresponding overall energy barrier.
Simulations of 6-DOF Motion with a Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2003-01-01
Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.
Broadband impedance boundary conditions for the simulation of sound propagation in the time domain.
Bin, Jonghoon; Yousuff Hussaini, M; Lee, Soogab
2009-02-01
An accurate and practical surface impedance boundary condition in the time domain has been developed for application to broadband-frequency simulation in aeroacoustic problems. To show the capability of this method, two kinds of numerical simulations are performed and compared with the analytical/experimental results: one is acoustic wave reflection by a monopole source over an impedance surface and the other is acoustic wave propagation in a duct with a finite impedance wall. Both single-frequency and broadband-frequency simulations are performed within the framework of linearized Euler equations. A high-order dispersion-relation-preserving finite-difference method and a low-dissipation, low-dispersion Runge-Kutta method are used for spatial discretization and time integration, respectively. The results show excellent agreement with the analytical/experimental results at various frequencies. The method accurately predicts both the amplitude and the phase of acoustic pressure and ensures the well-posedness of the broadband time-domain impedance boundary condition.
On the simulation of indistinguishable fermions in the many-body Wigner formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@gmail.com; Dimov, I.
2015-01-01
The simulation of quantum systems consisting of interacting, indistinguishable fermions is a formidable mathematical problem that poses severe numerical challenges. Many sophisticated methods addressing this problem are available, based on the many-body Schrödinger formalism. Recently, a Monte Carlo technique for the resolution of the many-body Wigner equation was introduced and successfully applied to the simulation of distinguishable, spinless particles. This numerical approach presents several advantages over other methods: it is based on an intuitive formalism in which quantum systems are described in terms of a quasi-distribution function, and it is highly scalable due to its Monte Carlo nature. In this work, we extend the many-body Wigner Monte Carlo method to the simulation of indistinguishable fermions. To this end, we first show how fermions are incorporated into the Wigner formalism. Then we demonstrate that the Pauli exclusion principle is intrinsic to the formalism. As a matter of fact, a numerical simulation of two strongly interacting fermions (electrons) is performed which clearly shows the appearance of a Fermi (or exchange-correlation) hole in the phase space, a clear signature of the presence of the Pauli principle. To conclude, we simulate 4, 8 and 16 non-interacting fermions, isolated in a closed box, and show that, as the number of fermions increases, we gradually recover the Fermi-Dirac statistics, a clear proof of the reliability of our proposed method for the treatment of indistinguishable particles.
NASA Astrophysics Data System (ADS)
Rodrigues, Fabiano S.; de Paula, Eurico R.; Zewdie, Gebreab K.
2017-03-01
We present results of Capon's method for estimation of in-beam images of ionospheric scattering structures observed by a small, low-power coherent backscatter interferometer. The radar interferometer operated in the equatorial site of São Luís, Brazil (2.59° S, 44.21° W, -2.35° dip latitude). We show numerical simulations that evaluate the performance of the Capon method for typical F region measurement conditions. Numerical simulations show that, despite the short baselines of the São Luís radar, the Capon technique is capable of distinguishing localized features with kilometric scale sizes (in the zonal direction) at F region heights. Following the simulations, we applied the Capon algorithm to actual measurements made by the São Luís interferometer during a typical equatorial spread F (ESF) event. As indicated by the simulations, the Capon method produced images that were better resolved than those produced by the Fourier method. The Capon images show narrow (a few kilometers wide) scattering channels associated with ESF plumes and scattering regions spaced by only a few tens of kilometers in the zonal direction. The images are also capable of resolving bifurcations and the C shape of scattering structures.
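Illustrative aside: the standard Capon (minimum-variance) power estimate, P(theta) = 1 / (a^H R^{-1} a), that underlies such in-beam imaging, sketched in Python. The steering matrix is assumed given (computing it requires the antenna geometry), and the diagonal loading term is an added assumption for numerical robustness, not from the paper.

```python
import numpy as np

def capon_image(snapshots, steering):
    """Capon (minimum-variance) in-beam imaging from interferometer
    samples. snapshots: (n_antennas, n_samples) complex voltages;
    steering: (n_antennas, n_directions) array response vectors a(theta).
    Returns P(theta) = 1 / (a^H R^{-1} a) for each direction."""
    n_ant, n_smp = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_smp          # sample covariance
    R += 1e-3 * np.trace(R).real / n_ant * np.eye(n_ant)  # diagonal loading
    R_inv = np.linalg.inv(R)
    denom = np.einsum('ad,ab,bd->d', steering.conj(), R_inv, steering)
    return 1.0 / denom.real
```

Minimizing output power subject to unit gain in each look direction is what suppresses sidelobe leakage and gives Capon its resolution advantage over the Fourier method noted above.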
Similar negative impacts of temperature on global wheat yield estimated by three independent methods
USDA-ARS?s Scientific Manuscript database
The potential impact of global temperature change on global wheat production has recently been assessed with different methods, scaling and aggregation approaches. Here we show that grid-based simulations, point-based simulations, and statistical regressions produce similar estimates of temperature ...
Heat simulation via Scilab programming
NASA Astrophysics Data System (ADS)
Hasan, Mohammad Khatim; Sulaiman, Jumat; Karim, Samsul Arifin Abdul
2014-07-01
This paper discusses the use of the open-source software Scilab to develop a heat simulator. The heat equation was used to simulate heat behavior in an object, and the simulator was developed using the finite difference method. Numerical experiments show that Scilab can produce a good simulation of heat behavior, with clear visual output, from only simple computer code.
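Illustrative aside: the paper's simulator is written in Scilab; an equivalent explicit finite-difference (FTCS) update for the 1D heat equation looks like this in Python.

```python
import numpy as np

def heat_ftcs(u0, alpha, dx, dt, n_steps):
    """Explicit (FTCS) finite-difference solution of the 1D heat
    equation u_t = alpha * u_xx with fixed-temperature ends.
    Stable only for r = alpha*dt/dx**2 <= 0.5."""
    u = np.asarray(u0, dtype=float).copy()
    r = alpha * dt / dx**2
    assert r <= 0.5, "FTCS stability limit violated"
    for _ in range(n_steps):
        # interior update; endpoints stay fixed (Dirichlet boundaries)
        u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u
```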
Chung, In-Young; Jang, Hyeri; Lee, Jieun; Moon, Hyunggeun; Seo, Sung Min; Kim, Dae Hwan
2012-02-17
We introduce a simulation method for the biosensor environment which treats the semiconductor and the electrolyte region together, using the well-established semiconductor 3D TCAD simulator tool. Using this simulation method, we conduct electrostatic simulations of SiNW biosensors with a more realistic target charge model where the target is described as a charged cube, randomly located across the nanowire surface, and analyze the Coulomb effect on the SiNW FET according to the position and distribution of the target charges. The simulation results show the considerable variation in the SiNW current according to the bound target positions, and also the dependence of conductance modulation on the polarity of target charges. This simulation method and the results can be utilized for analysis of the properties and behavior of the biosensor device, such as the sensing limit or the sensing resolution.
Pilot-in-the-Loop CFD Method Development
2017-04-20
the methods on the NAVAIR Manned Flight Simulator. Activities this period: During this report period, we implemented the CRAFT CFD code on the... Penn State VLRCROE flight simulator and performed the first Pilot-in-the-Loop (PILCFD) tests at Penn State using the COCOA5 clusters. The initial tests... integration of the flight simulator and Penn State computing infrastructure. Initial tests showed slower performance than real-time (3x slower than real
2010-09-30
simulating violent free-surface flows, and show the importance of wave breaking in energy transport... using Eulerian simulation. 3 IMPACT/APPLICATION: This project aims at developing an advanced simulation tool for multi-fluid free-surface flows that... several Eulerian and Lagrangian methods for free-surface turbulence and wave simulation. The WIND-SNOW is used to simulate
Computational Simulations of the Lateral-Photovoltage-Scanning-Method
NASA Astrophysics Data System (ADS)
Kayser, S.; Lüdge, A.; Böttcher, K.
2018-05-01
The major task of the Lateral Photovoltage Scanning (LPS) method is to detect doping striations and the shape of the solid-liquid interface of an indirect-semiconductor crystal. The method is sensitive to the gradient of the charge carrier density. To simulate the signal generation of the LPS method, we use a three-dimensional finite volume approach to solve the van Roosbroeck equations with COMSOL Multiphysics in a silicon sample. We show that the simulated LPS voltage is directly proportional to the gradient of a given doping distribution, as is also the case for the measured LPS voltage.
Optical simulation of flying targets using physically based renderer
NASA Astrophysics Data System (ADS)
Cheng, Ye; Zheng, Quan; Peng, Junkai; Lv, Pin; Zheng, Changwen
2018-02-01
The simulation of aerial flying targets is widely needed in many fields. This paper proposes a physically based method for the optical simulation of flying targets. First, three-dimensional target models are built and the motion speed and direction are defined. Next, the material of the target's outward appearance is simulated, and the illumination conditions are defined. All settings are then encoded in a description file. Finally, simulated results are generated by Monte Carlo ray tracing in a physically based renderer. Experiments show that this method is able to simulate materials, lighting, and motion blur for flying targets, and that it can generate convincing, high-quality simulation results.
Methods of sound simulation and applications in flight simulators
NASA Technical Reports Server (NTRS)
Gaertner, K. P.
1980-01-01
An overview of methods for electronically synthesizing sounds is presented. A given amount of hardware and computer capacity places an upper limit on the degree and fidelity of realism of sound simulation which is attainable. Good sound realism for aircraft simulators can be especially expensive because of the complexity of flight sounds and their changing patterns through time. Nevertheless, the flight simulator developed at the Research Institute for Human Engineering, West Germany, shows that it is possible to design an inexpensive sound simulator with the required acoustic properties using analog computer elements. The characteristics of the sub-sound elements produced by this sound simulator for take-off, cruise and approach are discussed.
Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P
2016-01-01
Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.
NASA Astrophysics Data System (ADS)
Simon-Liedtke, Joschua T.; Farup, Ivar; Laeng, Bruno
2015-01-01
Color-deficient people can be confronted with difficulties when navigating through daily life, for example when reading websites or media, navigating with maps, or retrieving information from public transport schedules. Color deficiency simulation and daltonization methods have been proposed to better understand the problems of color-deficient individuals and to improve color displays for their use. However, it remains unclear whether these "color prosthetic" methods really work and how well they improve the performance of color-deficient individuals. We introduce here two methods to evaluate color deficiency simulation and daltonization methods based on behavioral experiments that are widely used in the field of psychology. Firstly, we propose a Sample-to-Match Simulation Evaluation Method (SaMSEM); secondly, we propose a Visual Search Daltonization Evaluation Method (ViSDEM). Both methods can be used to validate and generalize simulation and daltonization methods related to color deficiency. We showed that both the response times (RT) and the accuracy of SaMSEM can be used as indicators of the success of color deficiency simulation methods and that performance in the ViSDEM can be used as an indicator of the efficacy of color deficiency daltonization methods. In future work, we will include comparison and analysis of different color deficiency simulation and daltonization methods with the help of SaMSEM and ViSDEM.
Spatio-Temporal Process Simulation of Dam-Break Flood Based on SPH
NASA Astrophysics Data System (ADS)
Wang, H.; Ye, F.; Ouyang, S.; Li, Z.
2018-04-01
On the basis of introducing the SPH (Smoothed Particle Hydrodynamics) simulation method, this paper gives solutions to the key research problems: the spatial and temporal scales suited to GIS (Geographical Information System) applications, the boundary condition equations combined with the underlying surface, and the kernel function and parameters applicable to dam-break flood simulation. On this basis, a calculation method for spatio-temporal process emulation of dam-break floods with elaborate particles is proposed, and the spatio-temporal process is dynamically simulated using GIS modelling and visualization. The results show that the method provides richer information and more objectively reflects real situations.
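Illustrative aside: the abstract does not state which kernel was adopted; a standard choice in SPH is the cubic spline smoothing kernel, sketched below in Python for 2D. The density at particle i then follows from the usual summation rho_i = sum_j m_j * W(|r_i - r_j|, h).

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard SPH cubic spline smoothing kernel in 2D (Monaghan form,
    normalisation 10/(7*pi*h^2)); r is a scalar particle distance and
    h the smoothing length. Support radius is 2*h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0
```

The compact support (particles farther than 2h contribute nothing) is what keeps SPH flood simulations tractable at the large particle counts a GIS-scale domain requires.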
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given complex real-time water situations, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
On the structure of viscous flow about the afterbody of hull
NASA Astrophysics Data System (ADS)
Yoshida, Osamu; Zhu, Ming; Miyata, Hideaki
1993-09-01
A finite-volume method is applied to a flow about full ship models in the curvilinear coordinate system. Simulations are carried out for SR196 frame-line series. The simulated results show the difference of the wake and the longitudinal vorticity between the different hull forms. The comparisons between simulated and measured results show qualitative agreements in the wake distributions near the propeller disk circumference.
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-06-12
Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
RCWA and FDTD modeling of light emission from internally structured OLEDs.
Callens, Michiel Koen; Marsman, Herman; Penninck, Lieven; Peeters, Patrick; de Groot, Harry; ter Meulen, Jan Matthijs; Neyts, Kristiaan
2014-05-05
We report on the fabrication and simulation of a green OLED with an Internal Light Extraction (ILE) layer. The optical behavior of these devices is simulated using both Rigorous Coupled Wave Analysis (RCWA) and Finite Difference Time-Domain (FDTD) methods. Results obtained using these two different techniques show excellent agreement and predict the experimental results with good precision. By verifying the validity of both simulation methods on the internal light extraction structure we pave the way to optimization of ILE layers using either of these methods.
NASA Astrophysics Data System (ADS)
Dou, Zhi-Wu
2010-08-01
To address the inherent safety problem puzzling the coal mining industry, and after analyzing the characteristics and applications of distributed interactive simulation based on the high level architecture (DIS/HLA), a new method is proposed for developing a distributed interactive simulation of coal mining inherent safety using HLA technology. After researching the function and structure of the system, a simple coal mining inherent safety system is modeled with HLA, the FOM and SOM are developed, and the mathematical models are suggested. The results of the case study show that HLA plays an important role in developing distributed interactive simulations of complicated distributed systems and that the method is valid for solving the problem puzzling the coal mining industry. For the coal mining industry, the conclusions show that a simulation system based on HLA helps to identify hazard sources, to prepare measures against accidents, and to improve the level of management.
Steady and Unsteady Nozzle Simulations Using the Conservation Element and Solution Element Method
NASA Technical Reports Server (NTRS)
Friedlander, David Joshua; Wang, Xiao-Yen J.
2014-01-01
This paper presents results from computational fluid dynamic (CFD) simulations of a three-stream plug nozzle. Time-accurate, Euler, quasi-1D and 2D-axisymmetric simulations were performed as part of an effort to provide a CFD-based approach to modeling nozzle dynamics. The CFD code used for the simulations is based on the space-time Conservation Element and Solution Element (CESE) method. Steady-state results were validated using the Wind-US code and a code utilizing the MacCormack method, while the unsteady results were partially validated via an aeroacoustic benchmark problem. The CESE steady-state flow field solutions showed excellent agreement with solutions derived from the other methods and codes, while preliminary unsteady results for the three-stream plug nozzle are also shown. Additionally, a study was performed to explore the sensitivity of gross thrust computations to the control surface definition. The results showed that most of the sensitivity in computing the gross thrust is attributed to the control surface stencil resolution and choice of stencil end points, and not to the control surface definition itself. Finally, comparisons between the quasi-1D and 2D-axisymmetric solutions were performed in order to gain insight on whether a quasi-1D solution can capture the steady and unsteady nozzle phenomena without the cost of a 2D-axisymmetric simulation. Initial results show that while the quasi-1D solutions are similar to the 2D-axisymmetric solutions, the inability of the quasi-1D simulations to predict two-dimensional phenomena limits their accuracy.
Large-scale expensive black-box function optimization
NASA Astrophysics Data System (ADS)
Rashid, Kashif; Bailey, William; Couët, Benoît
2012-09-01
This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset NPV. The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
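A minimal proxy-based loop of the kind described (fit a cheap radial-basis surrogate, optimize it, then spend one expensive simulation per iteration) might look as follows (Python with SciPy; the toy NPV function, bounds, kernel, and iteration counts are all hypothetical stand-ins for the reservoir simulator):

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_npv(x):
        # Stand-in for a reservoir simulation run (hypothetical objective).
        return -np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

    rng = np.random.default_rng(1)
    dim, n_init, n_iter = 5, 20, 15
    X = rng.uniform(0.0, 1.0, (n_init, dim))        # initial control settings
    y = np.array([expensive_npv(x) for x in X])     # expensive evaluations

    for _ in range(n_iter):
        proxy = RBFInterpolator(X, y, kernel='thin_plate_spline')
        # Maximize the cheap proxy instead of the simulator.
        res = minimize(lambda x: -proxy(x[None, :])[0],
                       x0=X[np.argmax(y)], bounds=[(0.0, 1.0)] * dim)
        X = np.vstack([X, res.x])
        y = np.append(y, expensive_npv(res.x))      # one new expensive run per iteration

    print("best NPV found:", y.max())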
Transient Macroscopic Chemistry in the DSMC Method
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.; Macrossan, M. N.; Abdel-Jawad, M.
2008-12-01
In the Direct Simulation Monte Carlo method, a combination of statistical and deterministic procedures applied to a finite number of `simulator' particles is used to model rarefied gas-kinetic processes. Traditionally, chemical reactions are modelled using information from specific colliding particle pairs. In the Macroscopic Chemistry Method (MCM), the reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell is used to determine a reaction rate coefficient for that cell. MCM has previously been applied to steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation and during the unsteady development of 2-D flow through a cavity. For the shock tube simulation, close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature and species mole fractions. For the cavity flow, a high degree of thermal non-equilibrium is present and non-equilibrium reaction rate correction factors are employed in MCM. Very close agreement is demonstrated for ensemble averaged mole fraction contours predicted by the particle and macroscopic methods at three different flow-times. A comparison of the accumulated number of net reactions per cell shows that both methods compute identical numbers of reaction events. For the 2-D flow, MCM required similar CPU and memory resources to the particle chemistry method. The Macroscopic Chemistry Method is applicable to any general DSMC code using any viscosity or non-reacting collision models and any non-reacting energy exchange models. MCM can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies.
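The core MCM idea (decouple reactions from collision pairs and drive them from cell-level information) can be sketched as follows (Python; the Arrhenius parameters, the binary A+B->C form, and the fractional-event handling are illustrative assumptions, not values from the paper):

    import numpy as np

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def cell_reaction_events(particle_temperatures, n_density, volume, dt,
                             A=1e-16, b=0.0, Ea=1.0e-19):
        # MCM-style sketch: use all particles in a cell to form a cell
        # temperature, evaluate an Arrhenius rate coefficient, and convert
        # it to an expected number of reaction events this time step.
        # A, b, Ea are hypothetical parameters for a binary reaction A+B->C.
        T = particle_temperatures.mean()              # cell temperature from all particles
        k = A * T ** b * np.exp(-Ea / (KB * T))       # rate coefficient for this cell
        n_events = k * n_density ** 2 * volume * dt   # expected reactions in the cell
        # Fire the integer part plus a Bernoulli trial on the remainder.
        return int(n_events) + (np.random.rand() < (n_events % 1.0))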
Mathematic models for a ray tracing method and its applications in wireless optical communications.
Zhang, Minglun; Zhang, Yangan; Yuan, Xueguang; Zhang, Jinnan
2010-08-16
This paper presents a new ray tracing method, which contains a whole set of mathematic models, and its validity is verified by simulations. In addition, both theoretical analysis and simulation results show that the computational complexity of the method is much lower than that of previous ones. Therefore, the method can be used to rapidly calculate the impulse response of wireless optical channels for complicated systems.
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedical applications, Monte Carlo (MC) simulation is commonly used to simulate light diffusion in tissue. However, most previous studies did not consider a radial beam LED as the light source. Therefore, we considered the characteristics of a radial beam LED and applied them to the MC simulation as the light source. In this paper, we consider 3 characteristics of a radial beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon launching area. The third is the refraction effect according to the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial beam LED based on MC simulation.
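The three launch characteristics listed above might be sampled as follows (Python; the disk-uniform position, Lambertian angular profile, and refractive indices are assumptions for illustration, since the abstract does not give the actual source model):

    import numpy as np

    rng = np.random.default_rng(2)

    def launch_photon(led_radius=0.5e-3, n_led=1.5, n_tissue=1.37):
        # Sample one photon launch for a radial-beam LED: launch area,
        # incident angle, and refraction at the LED/medium contact.
        # 1) launch position: uniform over the LED emitting disk
        r = led_radius * np.sqrt(rng.random())
        phi = 2.0 * np.pi * rng.random()
        x, y = r * np.cos(phi), r * np.sin(phi)
        # 2) incident angle: Lambertian emission (hypothetical profile)
        theta_i = np.arcsin(np.sqrt(rng.random()))
        # 3) refraction into the turbid medium (Snell's law)
        sin_t = np.sin(theta_i) * n_led / n_tissue
        if sin_t >= 1.0:
            return None  # total internal reflection: photon not launched
        theta_t = np.arcsin(sin_t)
        psi = 2.0 * np.pi * rng.random()  # independent azimuth of the direction
        direction = np.array([np.sin(theta_t) * np.cos(psi),
                              np.sin(theta_t) * np.sin(psi),
                              np.cos(theta_t)])
        return np.array([x, y, 0.0]), direction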
Application of multi-grid method on the simulation of incremental forging processes
NASA Astrophysics Data System (ADS)
Ramadan, Mohamad; Khaled, Mahmoud; Fourment, Lionel
2016-10-01
Numerical simulation is becoming essential in manufacturing large parts by incremental forging processes. It is a splendid tool for revealing physical phenomena, but behind the scenes an expensive bill must be paid: the computational time. That is why many techniques have been developed to decrease the computational time of numerical simulation. The Multi-Grid method is a numerical procedure that reduces the computational time by performing the resolution of the system of equations on several meshes of decreasing size, which smooths the low-frequency components of the solution, as well as its high-frequency components, faster. In this paper a Multi-Grid method is applied to the cogging process in the software Forge 3. The study is carried out using an increasing number of degrees of freedom. The results show that the calculation time is divided by two for a mesh of 39,000 nodes. The method is promising, especially if coupled with the Multi-Mesh method.
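The coarse-grid correction at the heart of such methods can be illustrated on a 1D Poisson problem (Python; the weighted-Jacobi smoother, injection restriction, and direct coarse solve are generic textbook choices, not the Forge 3 implementation):

    import numpy as np

    def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
        # Weighted Jacobi smoothing for -u'' = f with zero Dirichlet ends.
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def two_grid(u, f, h):
        # One two-grid cycle: smooth, restrict residual, solve coarse,
        # prolong the correction, smooth again. Assumes an odd grid size.
        u = jacobi(u, f, h, sweeps=3)
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)  # residual
        rc = r[::2].copy()                       # restriction by injection
        n_c = rc.size
        # Direct coarse solve of -e'' = r (small tridiagonal system, spacing 2h).
        A = (np.diag(2.0 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
             - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
        ec = np.zeros(n_c)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # prolongation
        return jacobi(u + e, f, h, sweeps=3)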
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Hybrid ODE/SSA methods and the cell cycle model
NASA Astrophysics Data System (ADS)
Wang, S.; Chen, M.; Cao, Y.
2017-07-01
Stochastic effect in cellular systems has been an important topic in systems biology. Stochastic modeling and simulation methods are important tools to study stochastic effect. Given the low efficiency of stochastic simulation algorithms, the hybrid method, which combines an ordinary differential equation (ODE) system with a stochastic chemically reacting system, shows its unique advantages in the modeling and simulation of biochemical systems. The efficiency of hybrid method is usually limited by reactions in the stochastic subsystem, which are modeled and simulated using Gillespie's framework and frequently interrupt the integration of the ODE subsystem. In this paper we develop an efficient implementation approach for the hybrid method coupled with traditional ODE solvers. We also compare the efficiency of hybrid methods with three widely used ODE solvers RADAU5, DASSL, and DLSODAR. Numerical experiments with three biochemical models are presented. A detailed discussion is presented for the performances of three ODE solvers.
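The coupling described (ODE integration of the fast subsystem interrupted by stochastic firings of slow reactions) can be sketched with an integrated-propensity event (Python/SciPy; the single-slow-reaction setup and tolerances are simplifying assumptions, not the paper's full implementation):

    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(3)

    def hybrid_step(x, t, t_end, ode_rhs, slow_propensity, slow_update):
        # Advance a hybrid ODE/SSA system to the next slow stochastic firing
        # (or to t_end): integrate the ODE subsystem while accumulating the
        # slow reaction's propensity until it reaches an exponential
        # threshold (Gillespie's next-reaction idea).
        xi = rng.exponential()  # firing threshold

        def rhs(t, y):
            # Last component accumulates the slow reaction's propensity.
            return np.append(ode_rhs(t, y[:-1]), slow_propensity(y[:-1]))

        def fire(t, y):
            # Event: integrated propensity reaches the threshold.
            return y[-1] - xi
        fire.terminal, fire.direction = True, 1

        sol = solve_ivp(rhs, (t, t_end), np.append(x, 0.0), events=fire, rtol=1e-6)
        x_new = sol.y[:-1, -1]
        if sol.t_events[0].size:        # the slow reaction fired
            x_new = slow_update(x_new)  # apply its stoichiometric change
        return sol.t[-1], x_new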
A Kernel-Free Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 4
NASA Technical Reports Server (NTRS)
Park, Young-Keun; Fahrenthold, Eric P.
2004-01-01
An improved hybrid particle-finite element method has been developed for the simulation of hypervelocity impact problems. Unlike alternative methods, the revised formulation computes the density without reference to any kernel or interpolation functions, for either the density or the rate of dilatation. This simplifies the state space model and leads to a significant reduction in computational cost. The improved method introduces internal energy variables as generalized coordinates in a new formulation of the thermomechanical Lagrange equations. Example problems show good agreement with exact solutions in one dimension and good agreement with experimental data in a three dimensional simulation.
An unconditionally stable method for numerically solving solar sail spacecraft equations of motion
NASA Astrophysics Data System (ADS)
Karwas, Alex
Solar sails use the endless supply of the Sun's radiation to propel spacecraft through space. The sails use the momentum transfer from the impinging solar radiation to provide thrust to the spacecraft while expending zero fuel. Recently, the first solar sail spacecraft, or sailcraft, named IKAROS completed a successful mission to Venus and proved the concept of solar sail propulsion. Sailcraft experimental data are difficult to gather due to the large expense of space travel; therefore, a reliable and accurate computational method is needed to make the process more efficient. Presented in this document is a new approach to simulating solar sail spacecraft trajectories. The new method provides unconditionally stable numerical solutions for trajectory propagation and includes an improved physical description over other methods. The unconditional stability of the new method means that a unique numerical solution is always determined. The improved physical description of the trajectory provides a numerical solution and time derivatives that are continuous throughout the entire trajectory. The error of the continuous numerical solution is also known for the entire trajectory. Optimal control for maximizing thrust is also provided within the framework of the new method. Verification of the new approach is presented through a mathematical description and through numerical simulations. The mathematical description provides details of the sailcraft equations of motion, the numerical method used to solve the equations, and the formulation for implementing the equations of motion into the numerical solver. Previous work in the field is summarized to show that the new approach can act as a replacement for previous trajectory propagation methods. A code was developed to perform the simulations and it is also described in this document. Results of the simulations are compared to the flight data from the IKAROS mission. Comparison of the two sets of data shows that the new approach is capable of accurately simulating sailcraft motion. Sailcraft and spacecraft simulations are compared to flight data and to other numerical solution techniques. The new formulation shows an increase in accuracy over a widely used trajectory propagation technique. Simulations for two-dimensional, three-dimensional, and variable-attitude trajectories are presented to show the multiple capabilities of the new technique. An element of optimal control is also part of the new technique. An additional equation is added to the sailcraft equations of motion that maximizes thrust in a specific direction. A technical description and results of an example optimization problem are presented. The spacecraft attitude dynamics equations take the simulation a step further by providing control torques using the angular rate and acceleration outputs of the numerical formulation.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
NASA Astrophysics Data System (ADS)
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that there are solid particles dispersed in a continuous fluid phase. A common technique for simulating such flow is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on larger problem sizes that are typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply it to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
Results of a search for deuterium at 25-50 GV/c using a magnetic spectrometer
NASA Technical Reports Server (NTRS)
Golden, R. L.; Stephens, S. A.; Webber, W. R.
1985-01-01
A method is presented for separately identifying isotopes using a Cerenkov detector and a magnet spectrometer. Simulations of the method are given for separating deuterium from protons. The simulations are compared with data gathered from the 1979 flight of the New Mexico State University balloonborne magnet spectrometer. The simulation and the data show the same general characteristics lending credence to the technique. The data show an apparent deuteron signal which is (11 + or - 3)% of the total sample in the rigidity region 38.5 to 50 GV/c. Until further background analysis and subtraction is performed this should be regarded as an upper limit to the deuteron/(deuteron+proton) ratio.
SLTCAP: A Simple Method for Calculating the Number of Ions Needed for MD Simulation.
Schmit, Jeremy D; Kariyawasam, Nilusha L; Needham, Vince; Smith, Paul E
2018-04-10
An accurate depiction of electrostatic interactions in molecular dynamics requires the correct number of ions in the simulation box to capture screening effects. However, the number of ions that should be added to the box is seldom given by the bulk salt concentration because a charged biomolecule solute will perturb the local solvent environment. We present a simple method for calculating the number of ions that requires only the total solute charge, solvent volume, and bulk salt concentration as inputs. We show that the most commonly used method for adding salt to a simulation results in an effective salt concentration that is too high. These findings are confirmed using simulations of lysozyme. We have established a web server where these calculations can be readily performed to aid simulation setup.
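Under the assumption that the screening relation used by SLTCAP for a 1:1 salt is N± = N0 * exp(∓ asinh(Q / (2 N0))), with N0 the bulk ion-pair count implied by the solvent volume and concentration, the calculation reduces to a few lines (Python; verify the relation against the paper or web server before relying on it):

    import numpy as np

    NA = 6.02214076e23  # Avogadro's number

    def sltcap_ion_counts(solute_charge_e, solvent_volume_L, conc_M):
        # Inputs: total solute charge (units of e), solvent volume (L),
        # and bulk salt concentration (mol/L), as listed in the abstract.
        N0 = conc_M * solvent_volume_L * NA           # bulk ion pairs in the box
        s = np.arcsinh(solute_charge_e / (2.0 * N0))  # assumed SLTCAP relation
        n_cations = N0 * np.exp(-s)
        n_anions = N0 * np.exp(+s)
        return int(np.rint(n_cations)), int(np.rint(n_anions))

    # Example: a -8e protein in 1.2e-22 L of solvent at 0.15 M NaCl (hypothetical);
    # note the cation excess over anions neutralizes the solute charge.
    print(sltcap_ion_counts(-8, 1.2e-22, 0.15))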
Calibration Method to Eliminate Zeroth Order Effect in Lateral Shearing Interferometry
NASA Astrophysics Data System (ADS)
Fang, Chao; Xiang, Yang; Qi, Keqi; Chen, Dawei
2018-04-01
In this paper, a calibration method is proposed which eliminates the zeroth order effect in lateral shearing interferometry. An analytical expression of the calibration error function is deduced, and the relationship between the phase-restoration error and calibration error is established. The analytical results show that the phase-restoration error introduced by the calibration error is proportional to the phase shifting error and zeroth order effect. The calibration method is verified using simulations and experiments. The simulation results show that the phase-restoration error is approximately proportional to the phase shift error and zeroth order effect, when the phase shifting error is less than 2° and the zeroth order effect is less than 0.2. The experimental result shows that compared with the conventional method with 9-frame interferograms, the calibration method with 5-frame interferograms achieves nearly the same restoration accuracy.
Maritime Search and Rescue via Multiple Coordinated UAS
2017-06-12
performed by a set of UAS. Our investigation covers the detection of multiple mobile objects by a heterogeneous collection of UAS. Three methods (two ... account for contingencies such as airspace deconfliction. Results are produced using simulation to verify the capability of the proposed method and to ... compare the various partitioning methods. Results from this simulation show that great gains in search efficiency can be made when the search space is ...
Car-to-pedestrian collision reconstruction with injury as an evaluation index.
Weng, Yiliu; Jin, Xianlong; Zhao, Zhijie; Zhang, Xiaoyun
2010-07-01
Reconstruction of accidents is currently considered a useful means in the analysis of accidents. By multi-body dynamics and numerical methods, and by adopting vehicle and pedestrian models, the scenario of the crash can often be simulated. When reconstructing the collisions, questions often arise regarding the criteria for the evaluation of simulation results. This paper proposes a reconstruction method for car-to-pedestrian collisions based on injuries of the pedestrians. In this method, pedestrian injury becomes a critical index in judging the correctness of the reconstruction result and guiding the simulation process. Application of this method to a real accident case is also presented in this paper. The study showed good agreement between the injuries obtained by numerical simulation and those obtained by forensic identification.
3D simulation of friction stir welding based on movable cellular automaton method
NASA Astrophysics Data System (ADS)
Eremina, Galina M.
2017-12-01
The paper is devoted to a 3D computer simulation of the peculiarities of material flow taking place in friction stir welding (FSW). The simulation was performed by the movable cellular automaton (MCA) method, which is a representative of particle methods in mechanics. Commonly, the flow of material in FSW is simulated based on computational fluid mechanics, assuming the material to be a continuum and ignoring its structure. The MCA method considers a material as an ensemble of bonded particles. The rupture of interparticle bonds and the formation of new bonds enable simulations of crack nucleation and healing as well as mass mixing and microwelding. The simulation results showed that using pins of simple shape (cylinder, cone, and pyramid) without a shoulder results in small displacements of plasticized material in the workpiece thickness direction. Nevertheless, the optimal ratio of longitudinal velocity to rotational speed makes it possible to transport the welded material around the pin several times and to produce a joint of good quality.
[Application of ordinary Kriging method in entomologic ecology].
Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong
2003-01-01
Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When fitting the variogram over a large range, an optimal fit cannot always be obtained automatically, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this procedure and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the constraint of unbiased estimation. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs were shown. The results showed that the fit based on the two-step spherical model was the best, and the one-step spherical model was better than the linear function model.
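A compact version of the ordinary Kriging step described, with a one-step spherical variogram and the unbiasedness constraint handled by a Lagrange multiplier, might read (Python; the nugget, sill, and range values would come from the variogram fit and are placeholders here):

    import numpy as np

    def spherical_variogram(h, nugget, sill, a):
        # One-step spherical variogram model gamma(h), with gamma(0) = 0.
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h < a, g, sill) * (h > 0)

    def ordinary_kriging(xy, z, x0, vario):
        # Ordinary Kriging estimate at x0; the unbiasedness constraint
        # (weights sum to one) is enforced via a Lagrange multiplier.
        n = len(xy)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[-1, -1] = 0.0
        A[:n, :n] = vario(d)
        b = np.ones(n + 1)
        b[:n] = vario(np.linalg.norm(xy - x0, axis=1))
        w = np.linalg.solve(A, b)[:n]
        return w @ z

    # Example use (hypothetical fit parameters):
    # vario = lambda h: spherical_variogram(h, 0.1, 1.0, 50.0)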
NASA Astrophysics Data System (ADS)
Furuichi, M.; Nishiura, D.
2015-12-01
Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g. rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes, for example, tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations. Parallel computing is therefore important for handling such a huge computational cost. An efficient parallel implementation of SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures. Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore a key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time across MPI processes as the nonlinear term of the parallel domain decomposition and minimizes it with a Newton-like iteration method. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for handling particles with different calculation costs (e.g. boundary particles) as well as heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
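In one dimension, the slice-grid rebalancing idea (cut the domain so that measured cost per process equalizes) can be sketched as below (Python; the uniform-cost-within-a-slice approximation and the single-axis decomposition are simplifications of the paper's scheme):

    import numpy as np

    def rebalance_slices(boundaries, measured_times):
        # Move interior slice boundaries so per-process work equalizes.
        # boundaries: length P+1 positions; measured_times: wall time per slice.
        widths = np.diff(boundaries)
        density = measured_times / widths        # estimated cost per unit length
        target = measured_times.sum() / len(measured_times)  # equal work per process
        cum_cost = np.cumsum(measured_times)
        new = [boundaries[0]]
        # Walk through the old slices, cutting where cumulative cost hits k*target.
        for k in range(1, len(measured_times)):
            goal = k * target
            i = int(np.searchsorted(cum_cost, goal))   # slice containing the cut
            prev = cum_cost[i - 1] if i > 0 else 0.0
            new.append(boundaries[i] + (goal - prev) / density[i])
        new.append(boundaries[-1])
        return np.array(new)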
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec
2015-06-01
A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% saving of total simulation time in the Landau and two-stream instability test cases, respectively.
Data-driven train set crash dynamics simulation
NASA Astrophysics Data System (ADS)
Tang, Zhao; Zhu, Yunrui; Nie, Yinyu; Guo, Shihui; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2017-02-01
Traditional finite element (FE) methods are arguably expensive in computation/simulation of the train crash. High computational cost limits their direct applications in investigating dynamic behaviours of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost with the trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, which is a machine learning approach that extracts useful patterns of force-displacement curves and predicts a force-displacement relation in a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods and the result shows that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
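A stripped-down version of such a surrogate (fit a random forest on offline FE force-displacement data across crash velocities, then query it at a new velocity) could look like this (Python with scikit-learn; the toy force law and velocity grid are hypothetical stand-ins for the FE database):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Offline FE results (hypothetical): force-displacement curves at several
    # crash velocities; features are (velocity, displacement), target is force.
    velocities = np.array([10.0, 15.0, 20.0, 25.0])          # m/s
    X, y = [], []
    for v in velocities:
        d = np.linspace(0.0, 0.5, 200)                       # m
        f = (2.0e6 + 4.0e4 * v) * d * np.exp(-2.0 * d)       # toy FE "crush force"
        X.append(np.column_stack([np.full_like(d, v), d]))
        y.append(f)
    X, y = np.vstack(X), np.concatenate(y)

    rf = RandomForestRegressor(n_estimators=200, n_jobs=-1).fit(X, y)

    # Predict a force-displacement curve for an unseen collision velocity,
    # to be used by a multi-body crash model as a nonlinear contact element.
    d_query = np.linspace(0.0, 0.5, 200)
    force_17 = rf.predict(np.column_stack([np.full_like(d_query, 17.0), d_query]))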
Probabilistic composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
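The Monte Carlo procedure described amounts to sampling constituent properties and pushing them through micromechanics relations; a minimal sketch follows (Python; the distributions and rule-of-mixtures relations are generic textbook choices, not necessarily those of the report):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 20000

    # Hypothetical constituent property distributions for a graphite/epoxy ply
    Ef = rng.normal(230.0, 12.0, n)    # fiber modulus, GPa
    Em = rng.normal(3.4, 0.25, n)      # matrix modulus, GPa
    Vf = rng.normal(0.60, 0.03, n)     # fiber volume fraction

    # Rule-of-mixtures micromechanics evaluated per Monte Carlo sample
    E1 = Vf * Ef + (1 - Vf) * Em                 # longitudinal modulus
    E2 = 1.0 / (Vf / Ef + (1 - Vf) / Em)         # transverse (inverse rule)

    print(f"E1 = {E1.mean():.1f} +/- {E1.std():.1f} GPa")
    print(f"E2 = {E2.mean():.2f} +/- {E2.std():.2f} GPa")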
NASA Astrophysics Data System (ADS)
Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi
This paper proposes a classification method using Bayesian analysis to classify time series data in the international emissions trading market, based on agent-based simulation, and compares it with a discrete Fourier transform analysis method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods map the time series data to distances, which are easier to understand and to draw inferences from than the raw time series; (2) these methods can analyze uncertain time series data, including stationary and non-stationary processes, via the distances obtained from the agent-based simulation; and (3) the Bayesian analysis method can discriminate a 1% difference in the agents' emission reduction targets.
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
NASA Astrophysics Data System (ADS)
Sultan, A. Z.; Hamzah, N.; Rusdi, M.
2018-01-01
The concept attainment method based on simulation was implemented to increase students' interest in the Engineering Mechanics course in the second semester of academic year 2016/2017 in the Manufacturing Engineering Program, Department of Mechanical Engineering, PNUP. The results of implementing this learning method show an increase in the students' interest in the lecture material, which is summarized in the form of interactive simulation CDs and teaching materials in the form of printed and electronic books. After the implementation of this simulation-based concept attainment method, student participation in presentations and discussions, as well as the submission of individual assignments, increased significantly. With this learning method, average student participation reached 89%, whereas before its application it averaged only 76%. Under the previous learning method, fewer than 5% of students achieved an A grade and more than 8% received a D grade. After the implementation of the new learning method (the simulation-based concept attainment method), more than 30% achieved an A grade and fewer than 1% received a D grade.
Comparative study on gene set and pathway topology-based enrichment methods.
Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim
2015-10-22
Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list disregarding any knowledge of gene or protein interactions. In contrast, the new group of so called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods, however their sensitivity was lower. We conducted one of the first comprehensive comparative works on evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways, however, they were not conclusively better in the other scenarios. This suggests that simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. Both types of methods for enrichment analysis require further improvements in order to deal with the problem of pathway overlaps.
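As a reference point for the gene set branch of this comparison, the classical over-representation test is a one-sided hypergeometric tail (Python with SciPy; the universe size, DE list, and pathway below are fabricated purely for illustration):

    from scipy.stats import hypergeom

    def gene_set_enrichment_p(n_universe, n_de, gene_set, de_genes):
        # Classic gene set over-representation test: P(overlap >= k) under
        # a hypergeometric null, ignoring pathway topology.
        k = len(set(gene_set) & set(de_genes))
        m = len(gene_set)
        # Survival function at k-1 gives P(X >= k).
        return hypergeom.sf(k - 1, n_universe, m, n_de)

    # Hypothetical example: 20000 genes, 400 differentially expressed,
    # a 120-gene pathway overlapping 10 of them.
    universe = [f"g{i}" for i in range(20000)]
    de = universe[:400]
    pathway = universe[390:510]
    print(gene_set_enrichment_p(20000, 400, pathway, de))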
NASA Astrophysics Data System (ADS)
Woldegiorgis, Befekadu Taddesse; van Griensven, Ann; Pereira, Fernando; Bauwens, Willy
2017-06-01
Most common numerical solutions used in CSTR-based in-stream water quality simulators are susceptible to instabilities and/or solution inconsistencies. Usually, they cope with instability problems by adopting computationally expensive small time steps. However, some simulators use fixed computation time steps and hence do not have the flexibility to do so. This paper presents a novel quasi-analytical solution for CSTR-based water quality simulators of an unsteady system. The robustness of the new method is compared with the commonly used fourth-order Runge-Kutta methods, the Euler method and three versions of the SWAT model (SWAT2012, SWAT-TCEQ, and ESWAT). The performance of each method is tested for different hypothetical experiments. Besides the hypothetical data, a real case study is used for comparison. The growth factors we derived as stability measures for the different methods and the R-factor—considered as a consistency measure—turned out to be very useful for determining the most robust method. The new method outperformed all the numerical methods used in the hypothetical comparisons. The application for the Zenne River (Belgium) shows that the new method provides stable and consistent BOD simulations whereas the SWAT2012 model is shown to be unstable for the standard daily computation time step. The new method unconditionally simulates robust solutions. Therefore, it is a reliable scheme for CSTR-based water quality simulators that use first-order reaction formulations.
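The stability advantage of a quasi-analytical update can be seen on a single CSTR with first-order decay, where freezing the inputs over a step gives a closed-form exponential update (Python; this is a minimal sketch of the exact-integration idea, not the authors' full multi-constituent scheme):

    import numpy as np

    def cstr_step(C, C_in, Q, V, k, dt):
        # Exact (exponential) update for a single CSTR with first-order decay:
        #     dC/dt = (Q/V) * (C_in - C) - k * C
        # Holding the inflow constant over dt, the linear ODE integrates to a
        # closed form, so the update is unconditionally stable for any dt.
        a = Q / V + k                      # total first-order loss rate
        C_ss = (Q / V) * C_in / a          # steady state for the frozen inputs
        return C_ss + (C - C_ss) * np.exp(-a * dt)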
Modeling and simulating industrial land-use evolution in Shanghai, China
NASA Astrophysics Data System (ADS)
Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl
2018-01-01
This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, the Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely resembles actual land transformation patterns and the model can not only simulate land development, but also redevelopment processes in metropolitan areas.
A fast image simulation algorithm for scanning transmission electron microscopy.
Ophus, Colin
2017-01-01
Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f^4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.
Direct Lagrangian tracking simulations of particles in vertically-developing atmospheric clouds
NASA Astrophysics Data System (ADS)
Onishi, Ryo; Kunishima, Yuichi
2017-11-01
We have been developing the Lagrangian Cloud Simulator (LCS), which follows the so-called Euler-Lagrangian framework, where flow motion and scalar transport (i.e., temperature and humidity) are computed with the Euler method and particle motion with the Lagrangian method. The LCS simulation considers the hydrodynamic interaction between approaching particles for robust collision detection. This leads to reliable simulations of the collision growth of cloud droplets. Recently the activation process, in which aerosol particles become tiny liquid droplets, has been implemented in the LCS. The present LCS can therefore consider the whole chain of warm-rain precipitation processes: activation, condensation, collision and drop precipitation. In this talk, after briefly introducing the LCS, we will show kinematic simulations using the LCS for a quasi-one-dimensional domain, i.e., a vertically elongated 3D domain. They are compared with one-dimensional kinematic simulations using a spectral-bin cloud microphysics scheme, which is based on the Euler method. The comparisons show fairly good agreement with small discrepancies, the source of which will be presented. The Lagrangian statistics, obtained for the first time for the vertical domain, will be the center of discussion. This research was supported by MEXT as ``Exploratory Challenge on Post-K computer'' (Frontiers of Basic Science: Challenging the Limits).
Peter, Silvia; Modregger, Peter; Fix, Michael K.; Volken, Werner; Frei, Daniel; Manser, Peter; Stampanoni, Marco
2014-01-01
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft tissue matter. However, there are still open questions about the details of the image formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented, which takes both particle- and wave-like properties of X-rays into consideration. A split approach is presented where we combine a Monte Carlo method (MC) based sample part with a wave optics simulation based propagation part, leading to a framework that takes both particle- and wave-like properties into account. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation of the framework shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations. This demonstrates that the physical processes relevant for developing a deeper understanding of scattering in the context of phase-sensitive imaging are modelled in a sufficiently accurate manner. The framework can be used for the simulation of phase-sensitive X-ray imaging, for instance for the simulation of grating interferometry or propagation-based imaging. PMID:24763652
Improved Density Functional Tight Binding Potentials for Metalloid Aluminum Clusters
2016-06-01
Simulations of the oxidation of Al4Cp*4 show reasonable comparison with a DFT-based Car-Parrinello method, including correct prediction of hydride transfers from Cp* to the metal centers during the ... ab initio molecular dynamics of the oxidation of Al4Cp*4 using a DFT-based Car-Parrinello method. This simulation, which took several months on the ...
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the Pulsed Electroacoustic (PEA) method have been processed by the deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has been proposed recently in which the deconvolution is eliminated. However, the surface charge cannot be represented well by this method because the surface charge has a bandwidth extending from zero to infinity. The bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function in order to apply the direct method properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and the resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is therefore proposed. This paper shows the authors' attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Due to the page limitation of the paper, the charge distribution generated by the simulation model is compared to that obtained by the direct method with a set of simulated signals.
Remapping dark matter halo catalogues between cosmological simulations
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.
2014-05-01
We present and test a method for modifying the catalogue of dark matter haloes produced from a given cosmological simulation, so that it resembles the result of a simulation with an entirely different set of parameters. This extends the method of Angulo & White, which rescales the full particle distribution from a simulation. Working directly with the halo catalogue offers an advantage in speed, and also allows modifications of the internal structure of the haloes to account for non-linear differences between cosmologies. Our method can be used directly on a halo catalogue in a self-contained manner without any additional information about the overall density field; although the large-scale displacement field is required by the method, this can be inferred from the halo catalogue alone. We show proof of concept of our method by rescaling a matter-only simulation with no baryon acoustic oscillation (BAO) features to a more standard Λ cold dark matter model containing a cosmological constant and a BAO signal. In conjunction with the halo occupation approach, this method provides a basis for the rapid generation of mock galaxy samples spanning a wide range of cosmological parameters.
Lu, Yehu; Wang, Faming; Peng, Hui
2016-07-01
The effect of sweating simulation methods on clothing evaporative resistance was investigated in a so-called isothermal condition (T_manikin = T_a = T_r). Two sweating simulation methods, namely, the pre-wetted fabric "skin" (PW) and the water supplied sweating (WS), were applied to determine clothing evaporative resistance on a "Newton" thermal manikin. Results indicated that the clothing evaporative resistance determined by the WS method was significantly lower than that measured by the PW method. In addition, the evaporative resistances measured by the two methods were correlated and exhibited a linear relationship. Validation experiments demonstrated that the empirical regression equation showed highly acceptable estimations. The study contributes to improving the accuracy of measurements of clothing evaporative resistance by means of a sweating manikin.
Lee, Kyung Eun; Lee, Seo Ho; Shin, Eun-Seok; Shim, Eun Bo
2017-06-26
Hemodynamic simulation for quantifying fractional flow reserve (FFR) is often performed in a patient-specific geometry of coronary arteries reconstructed from the images from various imaging modalities. Because optical coherence tomography (OCT) images can provide more precise vascular lumen geometry, regardless of stenotic severity, hemodynamic simulation based on OCT images may be effective. The aim of this study is to perform OCT-FFR simulations by coupling a 3D CFD model from geometrically correct OCT images with a LPM based on vessel lengths extracted from CAG data with clinical validations for the present method. To simulate coronary hemodynamics, we developed a fast and accurate method that combined a computational fluid dynamics (CFD) model of an OCT-based region of interest (ROI) with a lumped parameter model (LPM) of the coronary microvasculature and veins. Here, the LPM was based on vessel lengths extracted from coronary X-ray angiography (CAG) images. Based on a vessel length-based approach, we describe a theoretical formulation for the total resistance of the LPM from a three-dimensional (3D) CFD model of the ROI. To show the utility of this method, we present calculated examples of FFR from OCT images. To validate the OCT-based FFR calculation (OCT-FFR) clinically, we compared the computed OCT-FFR values for 17 vessels of 13 patients with clinically measured FFR (M-FFR) values. A novel formulation for the total resistance of LPM is introduced to accurately simulate a 3D CFD model of the ROI. The simulated FFR values compared well with clinically measured ones, showing the accuracy of the method. Moreover, the present method is fast in terms of computational time, enabling clinicians to provide solutions handled within the hospital.
Simulation of Rutherford backscattering spectrometry from arbitrary atom structures.
Zhang, S; Nordlund, K; Djurabekova, F; Zhang, Y; Velisa, G; Wang, T S
2016-10-01
Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to analyze what is the actual defect structure underlying the RBS/C signal. To reveal insights of RBS/C signals from arbitrarily complex defective atomic structures, we develop here a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.
Quadrature Moments Method for the Simulation of Turbulent Reactive Flows
NASA Technical Reports Server (NTRS)
Raman, Venkatramanan; Pitsch, Heinz; Fox, Rodney O.
2003-01-01
A sub-filter model for reactive flows, namely the DQMOM model, was formulated for Large Eddy Simulation (LES) using the filtered mass density function. Transport equations required to determine the location and size of the delta-peaks were then formulated for a 2-peak decomposition of the FDF. The DQMOM scheme was implemented in an existing structured-grid LES solver. Simulations of scalar shear layer using an experimental configuration showed that the first and second moments of both reactive and inert scalars are in good agreement with a conventional Lagrangian scheme that evolves the same FDF. Comparisons with LES simulations performed using laminar chemistry assumption for the reactive scalar show that the new method provides vast improvements at minimal computational cost. Currently, the DQMOM model is being implemented for use with the progress variable/mixture fraction model of Pierce. Comparisons with experimental results and LES simulations using a single-environment for the progress-variable are planned. Future studies will aim at understanding the effect of increase in environments on predictions.
The cost of conservative synchronization in parallel discrete event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approach the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems--those for which parallel processing is ideally suited--there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.; Berge, W. A.
1972-01-01
Manual flight control and emergency procedure task skill degradation was evaluated after time intervals of from 1 to 6 months. The tasks were associated with a simulated launch through the orbit insertion flight phase of a space vehicle. The results showed that acceptable flight control performance was retained for 2 months, rapidly deteriorating thereafter by a factor of 1.7 to 3.1 depending on the performance measure used. Procedural task performance showed unacceptable degradation after only 1 month, and exceeded an order of magnitude after 4 months. The effectiveness of static rehearsal (checklists and briefings) and dynamic warmup (simulator practice) retraining methods were compared for the two tasks. Static rehearsal effectively countered procedural skill degradation, while some combination of dynamic warmup appeared necessary for flight control skill retention. It was apparent that these differences between methods were not solely a function of task type or retraining method, but were a function of the performance measures used for each task.
An impact analysis of forecasting methods and forecasting parameters on bullwhip effect
NASA Astrophysics Data System (ADS)
Silitonga, R. Y. H.; Jelly, N.
2018-04-01
The bullwhip effect is an increase in the variance of demand fluctuations from downstream to upstream of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon. To study these factors, we can develop simulations. There are several ways to simulate the bullwhip effect in previous studies, such as mathematical equation modelling, information control modelling, computer programs, and more. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show the change in the bullwhip effect ratio due to differences in forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error. The forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, and increasing the signalling factor create a bigger bullwhip effect ratio.
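The moving-average finding can be reproduced with a minimal two-stage simulation (Python; the order-up-to policy, lead time, and demand distribution are common textbook assumptions rather than the Bullwhip Explorer settings):

    import numpy as np

    rng = np.random.default_rng(6)

    def bullwhip_ratio(T=5, z=1.64, n=20000, mu=100.0, sigma=10.0):
        # Order-up-to retailer with a moving-average forecast over T periods.
        # Returns Var(orders) / Var(demand); values > 1 indicate bullwhip.
        demand = rng.normal(mu, sigma, n)
        lead_time = 2
        orders = np.empty(n - T)
        prev_level = None
        for t in range(T, n):
            ma = demand[t - T:t].mean()                   # moving-average forecast
            sd = demand[t - T:t].std(ddof=1)
            level = lead_time * ma + z * sd * np.sqrt(lead_time)  # order-up-to level
            if prev_level is None:
                orders[t - T] = demand[t]
            else:
                orders[t - T] = demand[t] + (level - prev_level)  # replenishment rule
            prev_level = level
        return orders.var() / demand.var()

    for T in (3, 5, 10):   # shorter averaging window -> bigger bullwhip ratio
        print(T, round(bullwhip_ratio(T=T), 2))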
Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani
2004-03-01
The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
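For orientation, a plain Monte Carlo benchmark for one of the path-dependent contracts discussed (a down-and-out call under geometric Brownian motion) is sketched below (Python; all market parameters are illustrative, and this is the standard GBM approach rather than the paper's lattice Langevin scheme):

    import numpy as np

    rng = np.random.default_rng(7)

    def down_and_out_call(S0=100.0, K=100.0, B=90.0, r=0.05, sigma=0.2,
                          T=1.0, n_paths=100000, n_steps=250):
        # Monte Carlo price of a down-and-out barrier call under GBM.
        dt = T / n_steps
        drift = (r - 0.5 * sigma ** 2) * dt
        vol = sigma * np.sqrt(dt)
        S = np.full(n_paths, S0)
        alive = np.ones(n_paths, dtype=bool)
        for _ in range(n_steps):
            S *= np.exp(drift + vol * rng.standard_normal(n_paths))
            alive &= S > B            # knocked out once the barrier is hit
        payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
        return np.exp(-r * T) * payoff.mean()

    print(down_and_out_call())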
Simulating the component counts of combinatorial structures.
Arratia, Richard; Barbour, A D; Ewens, W J; Tavaré, Simon
2018-02-09
This article describes and compares methods for simulating the component counts of random logarithmic combinatorial structures such as permutations and mappings. We exploit the Feller coupling for simulating permutations to provide a very fast method for simulating logarithmic assemblies more generally. For logarithmic multisets and selections, this approach is replaced by an acceptance/rejection method based on a particular conditioning relationship that represents the distribution of the combinatorial structure as that of independent random variables conditioned on a weighted sum. We show how to improve its acceptance rate. We illustrate the method by estimating the probability that a random mapping has no repeated component sizes, and establish the asymptotic distribution of the difference between the number of components and the number of distinct component sizes for a very general class of logarithmic structures. Copyright © 2018. Published by Elsevier Inc.
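The Feller coupling itself is short enough to sketch: with independent Bernoulli(1/i) indicators (the first is always one), the spacings between successive ones in (xi_1, ..., xi_n, 1) have exactly the distribution of the cycle lengths of a uniform random permutation of n elements, so component counts are simulated with no rejection step.

```python
# Sketch of the Feller coupling for permutation cycle (component) counts.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def permutation_cycle_counts(n):
    xi = rng.random(n) < 1.0 / np.arange(1, n + 1)   # P(xi_i = 1) = 1/i; xi_1 = 1
    ones = np.flatnonzero(np.append(xi, True))       # sentinel 1 appended at the end
    lengths = np.diff(ones)                          # spacings = cycle lengths
    return Counter(lengths.tolist())                 # C_j = number of j-cycles

# Sanity check: the expected number of fixed points (1-cycles) is 1 for any n.
n = 1000
print(np.mean([permutation_cycle_counts(n)[1] for _ in range(2000)]))
```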
Comparative study of signalling methods for high-speed backplane transceiver
NASA Astrophysics Data System (ADS)
Wu, Kejun
2017-11-01
A combined analysis of transient simulation and statistical methods is proposed for the comparative study of signalling methods applied to high-speed backplane transceivers. The method enables fast and accurate signal-to-noise ratio and symbol error rate estimation of a serial link over a four-dimensional design space comprising channel characteristics, noise scenarios, equalisation schemes, and signalling methods. The proposed combined analysis chooses an efficient sampling size for performance evaluation. A comparative study of non-return-to-zero (NRZ), PAM-4, and four-phase shifted sinusoid symbol (PSS-4) signalling using parameterised behaviour-level simulation shows that PAM-4 and PSS-4 have substantial advantages over conventional NRZ in most cases. A comparison between PAM-4 and PSS-4 shows that PAM-4 suffers significant bit error rate degradation as the noise level increases.
Mapping Conformational Dynamics of Proteins Using Torsional Dynamics Simulations
Gangupomu, Vamshi K.; Wagner, Jeffrey R.; Park, In-Hee; Jain, Abhinandan; Vaidehi, Nagarajan
2013-01-01
All-atom molecular dynamics simulations are widely used to study the flexibility of protein conformations. However, enhanced sampling techniques are required for simulating protein dynamics that occur on the millisecond timescale. In this work, we show that torsional molecular dynamics simulations enhance protein conformational sampling by performing conformational search in the low-frequency torsional degrees of freedom. In this article, we use our recently developed torsional-dynamics method called Generalized Newton-Euler Inverse Mass Operator (GNEIMO) to study the conformational dynamics of four proteins. We investigate the use of the GNEIMO method in simulations of the conformationally flexible proteins fasciculin and calmodulin, as well as the less flexible crambin and bovine pancreatic trypsin inhibitor. For the latter two proteins, the GNEIMO simulations with an implicit-solvent model reproduced the average protein structural fluctuations and sampled conformations similar to those from Cartesian simulations with explicit solvent. The application of GNEIMO with replica exchange to the study of fasciculin conformational dynamics produced sampling of two of this protein’s experimentally established conformational substates. Conformational transition of calmodulin from the Ca2+-bound to the Ca2+-free conformation occurred readily with GNEIMO simulations. Moreover, the GNEIMO method generated an ensemble of conformations that satisfy about half of both short- and long-range interresidue distances obtained from NMR structures of holo to apo transitions in calmodulin. Although unconstrained all-atom Cartesian simulations have failed to sample transitions between the substates of fasciculin and calmodulin, GNEIMO simulations show the transitions in both systems. The relatively short simulation times required to capture these long-timescale conformational dynamics indicate that GNEIMO is a promising molecular-dynamics technique for studying domain motion in proteins. PMID:23663843
A probabilistic approach to composite micromechanics
NASA Technical Reports Server (NTRS)
Stock, T. A.; Bellini, P. X.; Murthy, P. L. N.; Chamis, C. C.
1988-01-01
Probabilistic composite micromechanics methods are developed that simulate expected uncertainties in unidirectional fiber composite properties. These methods are in the form of computational procedures using Monte Carlo simulation. A graphite/epoxy unidirectional composite (ply) is studied to demonstrate fiber composite material properties at the micro level. Regression results are presented to show the relative correlation between predicted and response variables in the study.
Modelling and study of active vibration control for off-road vehicle
NASA Astrophysics Data System (ADS)
Zhang, Junwei; Chen, Sizhong
2014-05-01
Owing to their special working characteristics and structure, engineering machines typically lack a conventional suspension system. Consequently, operators have to endure severe vibrations which are detrimental both to their health and to the productivity of the loader. Based on displacement control, an active damping method is developed for a skid-steer loader. In this paper, the whole hydraulic system for the active damping method is modelled, including models of the swash plate dynamics, proportional valve, piston accumulator, pilot-operated check valve, relief valve, pump losses, and cylinder. A new road excitation model is developed specifically for the skid-steer loader. The response of chassis vibration acceleration to road excitation is verified through simulation. The simulation result for passive accumulator damping is compared with measurements and shows close agreement. Building on this, a parallel PID controller and a tracking PID controller with acceleration feedback are brought into the simulation model, and the simulation results are compared with passive accumulator damping. The active damping methods with PID controllers are better at reducing chassis vibration acceleration and pitch movement. Finally, experimental testing of the active damping method is proposed as future work.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
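The paper's spectrum-based algorithm is more general than can be sketched here; for reference, the simplest member of the target class, a stationary binary Markov chain with exponentially decaying autocorrelation, can be simulated directly. The parameterization below is a standard construction, not the paper's method.

```python
# Reference point (not the paper's algorithm): stationary binary Markov
# chain with marginal P(X=1)=p and lag-k autocorrelation rho**k.
import numpy as np

rng = np.random.default_rng(2)

def binary_markov(n, p=0.3, rho=0.6):
    p11 = p + rho * (1 - p)      # P(X_t = 1 | X_{t-1} = 1)
    p01 = p * (1 - rho)          # P(X_t = 1 | X_{t-1} = 0)
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < p
    for t in range(1, n):
        x[t] = rng.random() < (p11 if x[t - 1] else p01)
    return x

x = binary_markov(100000)
lag = lambda k: np.corrcoef(x[:-k], x[k:])[0, 1]
print([round(lag(k), 3) for k in (1, 2, 3)])   # approx [0.6, 0.36, 0.216]
```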
NASA Astrophysics Data System (ADS)
Ekici, Altug; Tjiputra, Jerry; Grini, Alf; Muri, Helene
2017-04-01
We have simulated three different radiation management geoengineering methods (CCT, cirrus cloud thinning; SAI, stratospheric aerosol injection; MSB, marine sky brightening) on top of the RCP8.5 future scenario with the fully coupled Norwegian Earth System Model (NorESM). A globally consistent cooling in both atmosphere and soil is observed with all methods; precipitation patterns, however, depend on the method used. Globally, the CCT and MSB methods do not affect the vegetation carbon budget, while SAI leads to a loss compared to the RCP8.5 simulations. Spatially, the most sensitive region is the tropics, where the changes in vegetation carbon content track the precipitation changes. An increase in soil carbon is projected for all three methods, with the biggest change seen for SAI. Simulations with the CCT method lead to twice as much soil carbon retention in the tropics as the MSB method. Our findings show that such geoengineering methods have unforeseen regional consequences for the biogeochemical cycles, and these should be considered with care in future climate policies.
Stiegler, Marjorie; Hobbs, Gene; Martinelli, Susan M; Zvara, David; Arora, Harendra; Chen, Fei
2018-01-01
Background Simulation is an effective method for creating objective summative assessments of resident trainees. Real-time assessment (RTA) in simulated patient care environments is logistically challenging, especially when evaluating a large group of residents in multiple simulation scenarios. To date, there is very little data comparing RTA with delayed (hours, days, or weeks later) video-based assessment (DA) for simulation-based assessments of Accreditation Council for Graduate Medical Education (ACGME) sub-competency milestones. We hypothesized that sub-competency milestone evaluation scores obtained from DA, via audio-video recordings, are equivalent to the scores obtained from RTA. Methods Forty-one anesthesiology residents were evaluated in three separate simulated scenarios, representing different ACGME sub-competency milestones. All scenarios had one faculty member perform RTA and two additional faculty members perform DA. Subsequently, the scores generated by RTA were compared with the average scores generated by DA. Variance component analysis was conducted to assess the amount of variation in scores attributable to residents and raters. Results Paired t-tests showed no significant difference in scores between RTA and averaged DA for all cases. Cases 1, 2, and 3 showed an intraclass correlation coefficient (ICC) of 0.67, 0.85, and 0.50 for agreement between RTA scores and averaged DA scores, respectively. Analysis of variance of the scores assigned by the three raters showed a small proportion of variance attributable to raters (4% to 15%). Conclusions The results demonstrate that video-based delayed assessment is as reliable as real-time assessment, as both assessment methods yielded comparable scores. Based on a department’s needs or logistical constraints, our findings support the use of either real-time or delayed video evaluation for assessing milestones in a simulated patient care environment. PMID:29736352
Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method
NASA Astrophysics Data System (ADS)
Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computation costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change locally the simulation resolution. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so called "refined level" simulates a fraction of the "coarse level" with a resolution RF times bigger than the coarse level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems. We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with a MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.
Krylov subspace methods for computing hydrodynamic interactions in Brownian dynamics simulations
Ando, Tadashi; Chow, Edmond; Saad, Yousef; Skolnick, Jeffrey
2012-01-01
Hydrodynamic interactions play an important role in the dynamics of macromolecules. The most common way to take into account hydrodynamic effects in molecular simulations is in the context of a Brownian dynamics simulation. However, the calculation of correlated Brownian noise vectors in these simulations is computationally very demanding and alternative methods are desirable. This paper studies methods based on Krylov subspaces for computing Brownian noise vectors. These methods are related to Chebyshev polynomial approximations, but do not require eigenvalue estimates. We show that only low accuracy is required in the Brownian noise vectors to accurately compute values of dynamic and static properties of polymer and monodisperse suspension models. With this level of accuracy, the computational time of Krylov subspace methods scales very nearly as O(N^2) for the number of particles N up to 10 000, which was the limit tested. The performance of the Krylov subspace methods, especially the “block” version, is slightly better than that of the Chebyshev method, even without taking into account the additional cost of eigenvalue estimates required by the latter. Furthermore, at N = 10 000, the Krylov subspace method is 13 times faster than the exact Cholesky method. Thus, Krylov subspace methods are recommended for performing large-scale Brownian dynamics simulations with hydrodynamic interactions. PMID:22897254
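A hedged sketch of the Krylov idea: approximate y = D^(1/2) z with a Lanczos recurrence, so that only matrix-vector products with the SPD diffusion matrix are needed. The toy matrix, subspace size, and reference check below are illustrative assumptions.

```python
# Lanczos approximation of the correlated noise y = D^{1/2} z in the
# Krylov subspace span{z, Dz, ..., D^{m-1} z}; dimensions are illustrative.
import numpy as np
from scipy.linalg import sqrtm

def krylov_sqrt_apply(D, z, m=30):
    n = len(z)
    V = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    V[:, 0] = z / np.linalg.norm(z)
    for j in range(m):                      # Lanczos three-term recurrence
        w = D @ V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    e1 = np.zeros(m); e1[0] = 1.0
    return np.linalg.norm(z) * (V @ (sqrtm(T) @ e1)).real

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200))
D = A @ A.T + 200 * np.eye(200)             # toy SPD "diffusion" matrix
z = rng.standard_normal(200)
y = krylov_sqrt_apply(D, z)
ref = sqrtm(D) @ z                          # exact (dense) reference
print(np.linalg.norm(y - ref) / np.linalg.norm(ref))   # small relative error
```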
Phased-array vector velocity estimation using transverse oscillations.
Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A
2012-12-01
A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to expand the TO method to a phased-array geometry and to broaden the potential clinical applicability of the method. A phased-array transducer has a smaller footprint and a larger field of view than a linear array, and is therefore more suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method using a phased-array transducer for vector velocity estimation is evaluated through simulation and flow-rig measurements are acquired using an experimental scanner. The vast number of calculations needed to perform flow simulations makes the optimization of the TO fields a cumbersome process. Therefore, three performance metrics are proposed. They are calculated based on the complex TO spectrum of the combined TO fields. It is hypothesized that the performance metrics are related to the performance of the velocity estimates. The simulations show that the squared correlation values range from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because these performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation of both simulations and measurements. For simulations of a parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results. At a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated to be 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle value between 89° and 90°. For steering angles up to ±20°, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with a poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use in conjunction with a phased-array transducer, and that 2-D vector velocity estimation is possible down to a depth of 15 cm.
Zonal methods for the parallel execution of range-limited N-body simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.
2007-01-20
Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
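The shared primitive behind these methods can be sketched with a serial cell list: binning particles into cells no smaller than the interaction radius confines all interacting pairs to neighboring cells, and it is this spatial decomposition that zonal methods partition (and communicate) among processors. Sizes below are illustrative.

```python
# Serial cell-list sketch of range-limited pair finding in a periodic box.
import numpy as np
from itertools import product

def neighbor_pairs(pos, box, rc):
    ncell = max(1, int(box // rc))                    # cell edge >= rc
    cell = (pos // (box / ncell)).astype(int) % ncell
    buckets = {}
    for i, c in enumerate(map(tuple, cell)):
        buckets.setdefault(c, []).append(i)
    pairs = set()
    for c, members in buckets.items():
        for off in product((-1, 0, 1), repeat=3):     # 27 neighboring cells
            d = tuple((c[k] + off[k]) % ncell for k in range(3))
            for i in members:
                for j in buckets.get(d, []):
                    if i < j:
                        r = pos[i] - pos[j]
                        r -= box * np.round(r / box)  # minimum-image wrap
                        if (r @ r) < rc * rc:
                            pairs.add((i, j))
    return pairs

rng = np.random.default_rng(4)
pos = rng.random((500, 3)) * 10.0
print(len(neighbor_pairs(pos, box=10.0, rc=1.5)))
```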
Low-Density Nozzle Flow by the Direct Simulation Monte Carlo and Continuum Methods
NASA Technical Reports Server (NTRS)
Chung, Chang-Hong; Kim, Sku C.; Stubbs, Robert M.; Dewitt, Kenneth J.
1994-01-01
Two different approaches, the direct simulation Monte Carlo (DSMC) method based on molecular gasdynamics, and a finite-volume approximation of the Navier-Stokes equations, which is based on continuum gasdynamics, are employed in the analysis of a low-density gas flow in a small converging-diverging nozzle. The fluid experiences various kinds of flow regimes including continuum, slip, transition, and free-molecular. Results from the two numerical methods are compared with Rothe's experimental data, in which density and rotational temperature variations along the centerline and at various locations inside a low-density nozzle were measured by the electron-beam fluorescence technique. The continuum approach showed good agreement with the experimental data as far as density is concerned. The results from the DSMC method showed good agreement with the experimental data, both in the density and the rotational temperature. It is also shown that the simulation parameters, such as the gas/surface interaction model, the energy exchange model between rotational and translational modes, and the viscosity-temperature exponent, have substantial effects on the results of the DSMC method.
Deformation effect simulation and optimization for double front axle steering mechanism
NASA Astrophysics Data System (ADS)
Wu, Jungang; Zhang, Siqin; Yang, Qinglong
2013-03-01
This paper investigates the tire wear problem of heavy vehicles with a double front axle steering mechanism, focusing on the flexibility of the steering mechanism, and proposes a structural optimization method that combines traditional static structural theory with a dynamic structural approach, the Equivalent Static Load (ESL) method, to optimize key parts. Good simulation and test results show that the method has high practical and reference value for addressing tire wear in double front axle steering mechanism design.
Coniferous canopy BRF simulation based on 3-D realistic scene.
Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing
2011-09-01
Studying the radiation regime at large scales is difficult for computer simulation methods, so a simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful in remote sensing of heterogeneous coniferous forests over large regions. L-systems was applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two agreed well, at both the tree and the forest level.
Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.
Beentjes, Casper H L; Baker, Ruth E
2018-05-25
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^{-1/2}) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
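A hedged sketch of the combination, assuming an illustrative birth-death model: a fixed-step τ-leap whose Poisson firing counts are drawn by inverse transform from a scrambled Sobol sequence rather than pseudo-random numbers. Model and parameters are assumptions, not taken from the paper.

```python
# Randomised QMC tau-leaping sketch: one Sobol dimension per reaction per step.
import numpy as np
from scipy.stats import poisson, qmc

def tau_leap_paths(n_paths=1024, n_steps=50, tau=0.1, k_on=10.0, k_off=1.0):
    sob = qmc.Sobol(d=2 * n_steps, scramble=True, seed=5)
    u = np.clip(sob.random(n_paths), 1e-9, 1 - 1e-9)   # keep ppf well-defined
    x = np.zeros(n_paths)
    for s in range(n_steps):
        birth = poisson.ppf(u[:, 2 * s], k_on * tau)
        death = poisson.ppf(u[:, 2 * s + 1], np.maximum(k_off * tau * x, 1e-12))
        x = np.maximum(x + birth - death, 0.0)          # no negative populations
    return x

x = tau_leap_paths()
print(x.mean())   # fluctuates around k_on / k_off = 10 at stationarity
```

Using a power-of-two number of paths keeps the scrambled Sobol point set balanced; the low-discrepancy stream is what drives the improved error decay the paper analyzes.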
NASA Astrophysics Data System (ADS)
Chen, Xihui; Sun, Zhigang; Sun, Jianfen; Song, Yingdong
2017-12-01
In this paper, a numerical model that incorporates an oxidation damage model and a finite element model of 2D plain woven composites is presented for simulating the oxidation behavior of a 2D plain woven C/SiC composite under preload in an oxidizing atmosphere. An equal proportional reduction method is first proposed to calculate the residual moduli and strength of the unidirectional C/SiC composite. A multi-scale method is then developed to simulate the residual elastic moduli and strength of the 2D plain woven C/SiC composite, and it predicts both accurately. The simulated residual elastic moduli and strength of 2D plain woven C/SiC composites under preload in an oxidizing atmosphere show good agreement with experimental results. Furthermore, the influence of preload, oxidation time, temperature, and fiber volume fraction on the residual elastic modulus and strength of the composite is investigated.
Laser Doppler pulp vitality measurements: simulation and measurement
NASA Astrophysics Data System (ADS)
Ertl, T.
2017-02-01
Frequently, pulp vitality measurement is done in a dental practice by pressing a frozen cotton pellet on the tooth. This method is subjective, as the patient's response is required; it is sometimes painful and has moderate sensitivity and specificity. Other methods, based on optical or electrical measurements, have been published, but did not find widespread application in dental offices. Laser Doppler measurement of the blood flow in the pulp could be an objective method to measure pulp vitality, but the influence of the gingival blood flow on the measurements is a concern. Therefore, experiments and simulations were done to learn more about the gingival blood flow in relation to the pulpal blood flow and how to minimize its influence. Initial patient measurements were performed to show clinical feasibility. Results: Monte Carlo simulations and bench experiments simulating the blood flow in and around a tooth show that both basic configurations, transmission and reflection measurements, are possible. Most favorable is a multi-point measurement with different distances from the gingiva. Preliminary sensitivity/specificity results are promising and might allow an objective and painless measurement of tooth vitality.
Zhang, Shunqi; Yin, Tao; Ma, Ren; Liu, Zhipeng
2015-08-01
Functional imaging of biological electrical characteristics based on the magneto-acoustic effect gives valuable information about tissue in early tumor diagnosis; the time and frequency characteristics of the magneto-acoustic signal are therefore important in image reconstruction. This paper proposes a wave-summing method based on the Green's function solution for the acoustic source of the magneto-acoustic effect. Simulations and analysis under a quasi-1D transmission condition are carried out on the time and frequency characteristics of the magneto-acoustic signal for models of different thickness, and the simulated signals were verified through experiments. The simulations showed that the time-frequency characteristics of the magneto-acoustic signal reflect the thickness of the sample: thin samples (less than one wavelength of the pulse) and thick samples (larger than one wavelength) show different summed waveforms and frequency characteristics, owing to the difference in summing thickness. Experimental results verified the theoretical analysis and simulation results. This research lays a foundation for acoustic source and conductivity reconstruction in media of different thicknesses in magneto-acoustic imaging.
Zhang, Man; Zhou, Zhuhuang; Wu, Shuicai; Lin, Lan; Gao, Hongjian; Feng, Yusheng
2015-12-21
This study aims at improving the accuracy of temperature simulation for temperature-controlled radio frequency ablation (RFA). We proposed a new voltage-calibration method in the simulation and investigated the feasibility of a hyperbolic bioheat equation (HBE) in the RFA simulation with longer durations and higher power. A total of 40 RFA experiments was conducted in a liver-mimicking phantom. Four mathematical models with multipolar electrodes were developed by the finite element method in COMSOL software: HBE with/without voltage calibration, and the Pennes bioheat equation (PBE) with/without voltage calibration. The temperature-varied voltage calibration used in the simulation was calculated from an experimental power output and temperature-dependent resistance of liver tissue. We employed the HBE in simulation by considering the delay time τ of 16 s. First, for simulations by each kind of bioheat equation (PBE or HBE), we compared the differences between the temperature-varied voltage-calibration and the fixed-voltage values used in the simulations. Then, the comparisons were conducted between the PBE and the HBE in the simulations with temperature-varied voltage calibration. We verified the simulation results by experimental temperature measurements on nine specific points of the tissue phantom. The results showed that: (1) the proposed voltage-calibration method improved the simulation accuracy of temperature-controlled RFA for both the PBE and the HBE, and (2) for temperature-controlled RFA simulation with the temperature-varied voltage calibration, the HBE method was 0.55 °C more accurate than the PBE method. The proposed temperature-varied voltage calibration may be useful in temperature field simulations of temperature-controlled RFA. Besides, the HBE may be used as an alternative in the simulation of long-duration high-power RFA.
An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform
NASA Astrophysics Data System (ADS)
Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang
2018-06-01
This work discusses a fast and efficient method of constructing an open 3D manipulator virtual simulation platform, which makes it easier for teachers and students to learn about the forward and inverse kinematics of a robot manipulator. The method was implemented in MATLAB, in which the Robotics Toolbox, MATLAB GUI, and 3D animation (with models built in SolidWorks) were combined to produce a good visualization of the system. The advantages of this rapid-construction approach are its powerful input and output functions and its ability to simulate a 3D manipulator realistically. In this article, a Schunk six-DOF modular manipulator built by the authors' research group is used as an example. The implementation steps of the method are described in detail, yielding an open, realistic, high-level 3D virtual simulation platform. The graphs obtained from simulation show that the platform can be constructed quickly, with good usability and high maneuverability, and that it can meet the needs of scientific research and teaching.
Chen, Peng; Zhang, Jiquan; Sun, Yingyue; Liu, Xiaojing
2016-01-01
Urban waterlogging seriously threatens the safety of urban residents and property. Wargame simulation research on resident emergency evacuation from waterlogged areas can determine the effectiveness of emergency response plans for high-risk events at low cost. Based on wargame theory and emergency evacuation plans, we used a wargame exercise method incorporating qualitative and quantitative aspects to build a wargame exercise and evaluation model for urban waterlogging disaster emergency sheltering. The simulation was empirically tested in the Daoli District of Harbin. The results showed that the wargame simulation scored 96.40 points, evaluated as good. The simulation results indicate that wargame simulation of urban waterlogging emergency procedures can improve the flexibility and capacity for command, management and decision-making in emergency management departments. PMID:28009805
Simulation of unsteady flows by the DSMC macroscopic chemistry method
NASA Astrophysics Data System (ADS)
Goldsworthy, Mark; Macrossan, Michael; Abdel-jawad, Madhat
2009-03-01
In the Direct Simulation Monte-Carlo (DSMC) method, a combination of statistical and deterministic procedures applied to a finite number of 'simulator' particles is used to model rarefied gas-kinetic processes. In the macroscopic chemistry method (MCM) for DSMC, chemical reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell, not just those selected for collisions, is used to determine a reaction rate coefficient for that cell. Unlike collision-based methods, MCM can be used with any viscosity or non-reacting collision models and any non-reacting energy exchange models. It can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies. MCM has been previously validated for steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation. Close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature, density and species mole fractions, as well as for the accumulated number of net reactions per cell.
Heikkilä, Janne; Hynynen, Kullervo
2006-04-01
Many noninvasive ultrasound techniques have been developed to explore the mechanical properties of soft tissues. One of these methods, Localized Harmonic Motion Imaging (LHMI), has been proposed for ultrasound surgery monitoring. In LHMI, dynamic ultrasound radiation-force stimulation induces displacements in a target that can be measured using pulse-echo imaging and used to estimate the elastic properties of the target. In this initial simulation study, the use of a one-dimensional phased array is explored for the induction of the tissue motion. The study compares three different dual-frequency and amplitude-modulated single-frequency methods for inducing tissue motion. Simulations were computed in a homogeneous soft-tissue volume. The Rayleigh integral was used in the simulations of the ultrasound fields, and the tissue displacements were computed using a finite-element method (FEM). The simulations showed that amplitude-modulated sonication using a single frequency produced the largest vibration amplitude of the target tissue. These simulations demonstrate that the properties of the tissue motion are highly dependent on the sonication method and that it is important to consider the full three-dimensional distribution of the ultrasound field for controlling the induction of tissue motion.
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Because detailed information is lacking and spatial heterogeneity is inherent, geological properties can be treated as random variables, assumed to follow a multivariate distribution with spatial correlation. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed; statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
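A minimal sketch of the LULHS idea, assuming a toy 1-D grid with exponential spatial correlation: stratified uniforms from LHS become standard-normal scores, and the lower-triangular (LU/Cholesky) factor of the covariance matrix imposes the spatial correlation. Grid and covariance below are assumptions.

```python
# LHS + LU(Cholesky) sketch for unconditional random field generation.
import numpy as np
from scipy.stats import norm, qmc

x = np.linspace(0.0, 100.0, 50)                        # toy 1-D grid
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)    # correlation length 20
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))     # lower-triangular factor

n_real = 64
u = qmc.LatinHypercube(d=len(x), seed=6).random(n_real)  # stratified uniforms
z = norm.ppf(u)                                          # normal scores
fields = z @ L.T                                         # rows ~ N(0, C)

print(fields.shape)                                      # (64, 50)
print(np.corrcoef(fields[:, 0], fields[:, 1])[0, 1])     # near exp(-dx/20)
```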
Research on radiation characteristic of plasma antenna through FDTD method.
Zhou, Jianming; Fang, Jingjing; Lu, Qiuyuan; Liu, Fan
2014-01-01
The radiation characteristics of a plasma antenna are investigated using the finite-difference time-domain (FDTD) approach in this paper. Using the FDTD method, we study the propagation of electromagnetic waves in free space in stretched coordinates and derive the iterative update equations for Maxwell's equations. To validate the method, we simulate electromagnetic wave propagation in free space; the results show that the wave spreads out from the signal source and is absorbed by the perfectly matched layer (PML). We then study the propagation of electromagnetic waves in plasma using the Boltzmann-Maxwell theory, verifying it by simulating the whole process of one-dimensional wave propagation in plasma; the results show that the Boltzmann-Maxwell theory explains the propagation of electromagnetic waves in plasma. Finally, a two-dimensional simulation model of the plasma antenna is established in cylindrical coordinates, and the near-field and far-field radiation patterns are obtained. The experiments show that varying the electron density changes the radiation characteristics.
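As a point of reference (not the paper's stretched-coordinate formulation), the free-space validation step can be sketched with a minimal one-dimensional Yee scheme; grid parameters are illustrative and a crude absorbing boundary stands in for the PML.

```python
# Minimal 1-D FDTD (Yee) sketch: leapfrog E/H updates with a soft source.
import numpy as np

nx, nt = 400, 900
c, dx = 3e8, 1e-3
dt = 0.5 * dx / c                      # Courant number S = c*dt/dx = 0.5
imp0 = 377.0                           # free-space wave impedance

E = np.zeros(nx)
H = np.zeros(nx - 1)
for n in range(nt):
    H += np.diff(E) * (c * dt / dx) / imp0          # update H from curl E
    E[1:-1] += np.diff(H) * (c * dt / dx) * imp0    # update E from curl H
    E[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
    E[0], E[-1] = E[1], E[-2]                       # crude absorbing boundary

print(float(np.max(np.abs(E))))        # residual field after the pulse exits
```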
Fast animation of lightning using an adaptive mesh.
Kim, Theodore; Lin, Ming C
2007-01-01
We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
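The growth rule at the heart of the dielectric breakdown model is compact enough to sketch: given potentials phi at candidate sites adjacent to the channel (obtained from the Poisson solve the abstract discusses), a site is chosen with probability proportional to phi^eta, where eta controls branchiness. Values below are illustrative.

```python
# DBM growth-site selection (the Poisson/ICCG solve itself is not shown).
import numpy as np

rng = np.random.default_rng(7)

def choose_growth_site(phi_candidates, eta=2.0):
    w = np.abs(phi_candidates) ** eta        # growth probability ~ phi**eta
    return rng.choice(len(phi_candidates), p=w / w.sum())

# toy potentials at six candidate neighbor cells
print(choose_growth_site(np.array([0.1, 0.4, 0.9, 0.2, 0.05, 0.6])))
```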
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a waymore » of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.« less
Roggemann, M C; Welsh, B M; Montera, D; Rhoadarmer, T A
1995-07-10
Simulating the effects of atmospheric turbulence on optical imaging systems is an important aspect of understanding the performance of these systems. Simulations are particularly important for understanding the statistics of some adaptive-optics system performance measures, such as the mean and variance of the compensated optical transfer function, and for understanding the statistics of estimators used to reconstruct intensity distributions from turbulence-corrupted image measurements. Current methods of simulating the performance of these systems typically make use of random phase screens placed in the system pupil. Methods exist for making random draws of phase screens that have the correct spatial statistics. However, simulating temporal effects and anisoplanatism requires one or more phase screens at different distances from the aperture, possibly moving with different velocities. We describe and demonstrate a method for creating random draws of phase screens with the correct space-time statistics for arbitrary turbulence and wind-velocity profiles, which can be placed in the telescope pupil in simulations. Results are provided for both the von Kármán and the Kolmogorov turbulence spectra. We also show how to simulate anisoplanatic effects with this technique.
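A single-screen sketch of the underlying FFT spectral method, assuming the Kolmogorov phase PSD 0.023 r0^(-5/3) f^(-11/3): complex white noise is filtered by the square root of the PSD and inverse transformed. Normalization conventions vary, low-order subharmonics are omitted, and the paper's space-time correlation machinery is not reproduced here.

```python
# FFT-based Kolmogorov phase screen sketch (spatial statistics only).
import numpy as np

def kolmogorov_screen(n=256, dx=0.01, r0=0.1, seed=8):
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                         # frequency grid spacing
    fx = np.fft.fftfreq(n, dx)
    f = np.hypot(*np.meshgrid(fx, fx))
    psd = 0.023 * r0 ** (-5.0 / 3.0) * np.where(f > 0, f, np.inf) ** (-11.0 / 3.0)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return screen.real                          # phase in radians

phi = kolmogorov_screen()
print(phi.std())
```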
A fast exact simulation method for a class of Markov jump processes.
Li, Yao; Hu, Lili
2015-11-14
A new variant of the stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulation of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly apply a hash-table-like bucket sort to all event times that fall within a time step of length τ. This paper serves as an introduction to this new SSA method: we introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large-scale problems.
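For context, the textbook direct-method SSA that such variants accelerate is sketched below; each event here costs O(K) in the number K of clocks (propensity scan plus selection), which is the bookkeeping the HLM's bucket sort avoids. The birth-death model is illustrative.

```python
# Textbook direct-method SSA (Gillespie) for a small reaction network.
import numpy as np

rng = np.random.default_rng(9)

def ssa_direct(x0, stoich, rate_fns, t_end):
    t, x = 0.0, np.array(x0, dtype=float)
    while True:
        a = np.array([f(x) for f in rate_fns])   # propensities of all clocks
        a0 = a.sum()
        if a0 <= 0:
            return t, x
        t += rng.exponential(1.0 / a0)           # waiting time to next event
        if t >= t_end:
            return t_end, x
        j = rng.choice(len(a), p=a / a0)         # which clock rings
        x += stoich[j]

# birth-death: 0 -> X at rate 10, X -> 0 at rate 1*X
t, x = ssa_direct([0], np.array([[1], [-1]]),
                  [lambda x: 10.0, lambda x: 1.0 * x[0]], t_end=50.0)
print(x)   # fluctuates around 10
```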
Direct Harmonic Linear Navier-Stokes Methods for Efficient Simulation of Wave Packets
NASA Technical Reports Server (NTRS)
Streett, C. L.
1998-01-01
Wave packets produced by localized disturbances play an important role in transition in three-dimensional boundary layers, such as that on a swept wing. Starting with the receptivity process, we show the effects of wave-space energy distribution on the development of packets and other three-dimensional disturbance patterns. Nonlinearity in the receptivity process is specifically addressed, including demonstration of an effect which can enhance receptivity of traveling crossflow disturbances. An efficient spatial numerical simulation method is allowing most of the simulations presented to be carried out on a workstation.
Numerical Simulation of Two Dimensional Flows in Yazidang Reservoir
NASA Astrophysics Data System (ADS)
Huang, Lingxiao; Liu, Libo; Sun, Xuehong; Zheng, Lanxiang; Jing, Hefang; Zhang, Xuande; Li, Chunguang
2018-01-01
This paper studied the problem of water flow in the Yazidang reservoir. A 2-D RNG turbulence model was built, the boundary conditions were set, the finite volume method was used to discretize the equations, and the grid was generated by the advancing-front method. Two flow-field conditions of the reservoir were simulated, and the simulated and measured average vertical velocities near the water inlet and the water intake were compared. The results showed that the mathematical model can be applied to similar industrial water reservoirs.
Simulation of Hypervelocity Impact on Aluminum-Nextel-Kevlar Orbital Debris Shields
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
2000-01-01
An improved hybrid particle-finite element method has been developed for hypervelocity impact simulation. The method combines the general contact-impact capabilities of particle codes with the true Lagrangian kinematics of large-strain finite element formulations. Unlike some alternative schemes, which couple Lagrangian finite element models with smoothed particle hydrodynamics, the present formulation makes no use of slidelines or penalty forces. The method has been implemented in a parallel, three-dimensional computer code. Simulations of three-dimensional orbital debris impact problems using this parallel hybrid particle-finite element code show good agreement with experiment and good speedup in parallel computation. The simulations included single- and multi-plate shields as well as aluminum and composite shielding materials, at an impact velocity of eleven kilometers per second.
NASA Astrophysics Data System (ADS)
Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping
2015-01-01
As one of the critical stages of a very large scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) profiles and lessening the standing wave effects. However, the full 3-D chemically amplified resist simulation is not widely adopted during the postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques, such as source mask optimization and subresolution assist features based on the Sylvester equation and Abbe-principal component analysis method. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
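The abstract does not spell out its Sylvester formulation; as a hedged sketch, one standard way a 2-D diffusion step (such as PEB acid diffusion) reduces to a Sylvester equation A X + X B = Q is through a separable implicit update, shown here with SciPy's solver. Grid size, diffusivity, and time step are assumptions.

```python
# One implicit diffusion step written as a Sylvester equation:
# (I - dt*D*L) C_new + C_new (-dt*D*L^T) = C_old.
import numpy as np
from scipy.linalg import solve_sylvester

n, D, dt = 64, 1.0, 0.05
L1 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1))             # 1-D Laplacian stencil

C = np.zeros((n, n)); C[n // 2, n // 2] = 1.0    # point acid source
A = np.eye(n) - dt * D * L1                      # row-direction operator
B = -dt * D * L1.T                               # column-direction operator
C = solve_sylvester(A, B, C)                     # solves A X + X B = C_old
print(C.sum(), C.max())                          # spread of the source
```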
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear, coupled or uncoupled differential equations. For the different parameter values that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no linearization of the variables is assumed. The design of the models, which are run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.
First-principles simulations of heat transport
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
2017-11-01
Advances in understanding heat transport in solids were recently reported by both experiment and theory. However an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close to equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.
Simulation Study of Effects of the Blind Deconvolution on Ultrasound Image
NASA Astrophysics Data System (ADS)
He, Xingwu; You, Junchen
2018-03-01
Ultrasound image restoration is an essential subject in medical ultrasound imaging. However, without sufficient and precise system knowledge, traditional image restoration methods based on prior knowledge of the system often fail to improve image quality. In this paper, we use simulated ultrasound images to assess the effectiveness of blind deconvolution for ultrasound image restoration. Experimental results demonstrate that, compared with traditional image restoration, blind deconvolution can be applied to ultrasound image restoration and achieves satisfactory results without precise prior knowledge. Even with an inaccurate small initial PSF, blind deconvolution improves the overall image quality of ultrasound images, yielding much better SNR and image resolution; measurements of the time consumption of these methods show no significant increase on a GPU platform.
On the simulation and mitigation of anisoplanatic optical turbulence for long range imaging
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; LeMaster, Daniel A.
2017-05-01
We describe a numerical wave propagation method for simulating long range imaging of an extended scene under anisoplanatic conditions. Our approach computes an array of point spread functions (PSFs) for a 2D grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. To validate the simulation we compare simulated outputs with the theoretical anisoplanatic tilt correlation and differential tilt variance. This is in addition to comparing the long- and short-exposure PSFs, and isoplanatic angle. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. The simulation tool is also used here to quantitatively evaluate a recently proposed block-matching and Wiener filtering (BMWF) method for turbulence mitigation. In this method, a block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged and processed with a Wiener filter for restoration. A novel aspect of the proposed BMWF method is that the PSF model used for restoration takes into account the level of geometric correction achieved during image registration. This way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. The BMWF method is relatively simple computationally, and yet has excellent performance in comparison to state-of-the-art benchmark methods.
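The Wiener filtering stage can be sketched as follows; the Gaussian PSF is an illustrative stand-in for the turbulence PSF model which, in the BMWF method, would reflect the residual blur after block-matching correction.

```python
# Frequency-domain Wiener restoration of a blurred, noisy frame.
import numpy as np

def wiener_restore(image, psf, nsr=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.fft2(image)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * G   # Wiener estimate
    return np.real(np.fft.ifft2(F_hat))

n = 128
yy, xx = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))         # stand-in Gaussian PSF
psf /= psf.sum()

rng = np.random.default_rng(10)
truth = rng.random((n, n))
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_restore(blurred + 0.01 * rng.standard_normal((n, n)), psf)
print(np.abs(restored - truth).mean())                # residual error
```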
Incremental update of electrostatic interactions in adaptively restrained particle simulations.
Edorh, Semeho Prince A; Redon, Stéphane
2018-04-06
The computation of long-range potentials is one of the demanding tasks in Molecular Dynamics. During the last decades, an inventive panoply of methods was developed to reduce the CPU time of this task. In this work, we propose a fast method dedicated to the computation of the electrostatic potential in adaptively restrained systems. We exploit the fact that, in such systems, only some particles are allowed to move at each timestep. We developed an incremental algorithm derived from a multigrid-based alternative to traditional Fourier-based methods. Our algorithm was implemented inside LAMMPS, a popular molecular dynamics simulation package. We evaluated the method on different systems. We showed that the new algorithm's computational complexity scales with the number of active particles in the simulated system, and is able to outperform the well-established Particle Particle Particle Mesh (P3M) for adaptively restrained simulations. © 2018 Wiley Periodicals, Inc.
Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations
NASA Astrophysics Data System (ADS)
Ahn, Kyungjin
2015-02-01
We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that radiative transfer calculation of the time-delayed fields can be easily achieved in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function using the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in high-redshift astrophysics involving the first stars.
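A hedged toy sketch of the core trick under periodic boundary conditions: the field is obtained as a convolution of the source distribution with a tabulated kernel, evaluated as a product in Fourier space. The 1/r^2 kernel and grid are illustrative stand-ins; the retarded-time binning is only indicated in the closing comment.

```python
# Periodic FFT convolution of a source grid with a radial kernel.
import numpy as np

n = 64
x = np.fft.fftfreq(n) * n                        # signed periodic coordinates
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2)
green = np.where(r > 0, 1.0 / np.maximum(r, 1.0) ** 2, 0.0)   # toy 1/r^2 kernel

src = np.zeros((n, n, n))                        # a few point sources
src[10, 20, 30] = src[40, 8, 50] = 1.0

field = np.fft.ifftn(np.fft.fftn(src) * np.fft.fftn(green)).real
print(field.max())
# In the full method, one such convolution is performed per retarded-time
# bin, with sources assigned to bins by their light-cone distance from
# each cell, and the results summed.
```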
Relaxation mode analysis of a peptide system: comparison with principal component analysis.
Mitsutake, Ayori; Iijima, Hiromitsu; Takano, Hiroshi
2011-10-28
This article reports the first attempt to apply the relaxation mode analysis method to a simulation of a biomolecular system. In biomolecular systems, principal component analysis is a well-known method for analyzing the static properties of structural fluctuations obtained by a simulation and for classifying the structures into groups. On the other hand, relaxation mode analysis has been used to analyze the dynamic properties of homopolymer systems. In this article, a long Monte Carlo simulation of Met-enkephalin in the gas phase has been performed. The results are analyzed by the principal component analysis and relaxation mode analysis methods. We compare the results of both methods and show the effectiveness of relaxation mode analysis.
A method to reproduce alpha-particle spectra measured with semiconductor detectors.
Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín
2010-01-01
A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from ²³³U and ²⁴¹Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.
Nursing students' perceptions of high- and low-fidelity simulation used as learning methods.
Tosterud, Randi; Hedelin, Birgitta; Hall-Lord, Marie Louise
2013-07-01
Due to the increasing focus on simulation used in nursing education, there is a need to examine how the scenarios and different simulation methods used are perceived by students. The aim of this study was to examine nursing students' perceptions of scenarios played out in different simulation methods, and whether their educational level influenced their perception. The study had a quantitative, evaluative and comparative design. The sample consisted of baccalaureate nursing students (n = 86) within various educational levels. The students were randomly divided into groups. They solved a patient case adapted to their educational level by using a high-fidelity patient simulator, a static mannequin or a paper/pencil case study. Data were collected by three instruments developed by the National League for Nursing. The results showed that the nursing students reported satisfaction with the implementation of the scenarios regardless of the simulation methods used. The findings indicated that the students who used the paper/pencil case study were the most satisfied. Moreover, educational level did not seem to influence their perceptions. Independent of educational level, the findings indicated that simulation with various degrees of fidelity could be used in nursing education. There is a need for further research to examine more closely the rationale behind the students' perception of the simulation methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Paulk, C. H., Jr.; Astill, D. L.; Donley, S. T.
1983-01-01
The operation of the SH-2F helicopter from the decks of small ships in adverse weather was simulated using a large amplitude vertical motion simulator, a wide angle computer generated imagery visual system, and an interchangeable cab (ICAB). The simulation facility, the mathematical programs, and the validation method used to ensure simulation fidelity are described. The results show the simulator to be a useful tool in simulating the ship-landing problem. Characteristics of the ICAB system and ways in which the simulation can be improved are presented.
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features, so it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed for feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach, which selects features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, choosing the top-performing methods and selecting the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for high-effect-size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, using a collective approach proves more beneficial for selecting variables with epistatic effects, even in low-effect-size datasets and across different genetic architectures. Following this, we applied our collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). In this study, we showed through simulation studies that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single feature selection method. We demonstrated the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis, and applied our method to identify non-linear networks associated with obesity.
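A minimal sketch of the union idea, using generic scikit-learn selectors as stand-ins for the parametric, non-parametric, and data mining methods actually evaluated in the paper (MDR, Ranger, gradient boosting, etc.):

import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif

def collective_select(X, y, top_frac=0.01):
    # Score every feature with several independent methods.
    n_top = max(1, int(top_frac * X.shape[1]))
    scores = [
        mutual_info_classif(X, y),
        RandomForestClassifier(n_estimators=200).fit(X, y).feature_importances_,
        GradientBoostingClassifier().fit(X, y).feature_importances_,
    ]
    selected = set()
    for s in scores:  # take the union of each method's top-ranked variants
        selected |= set(np.argsort(s)[::-1][:n_top])
    return sorted(selected)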
Speeding up N-body simulations of modified gravity: chameleon screening models
NASA Astrophysics Data System (ADS)
Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo
2017-02-01
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Tools and Equipment Modeling for Automobile Interactive Assembling Operating Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Dianliang; Zhu Hongmin; Shanghai Key Laboratory of Advance Manufacturing Environment
Tools and equipment play an important role in the simulation of virtual assembly, especially in assembly process simulation and planning. Because of their variety in function and complexity in structure and manipulation, the simulation of tools and equipment remains a challenge for interactive assembly operation. Based on an analysis of the details and characteristics of interactive operations for automobile assembly, the functional requirements for tools and equipment of automobile assembly are given. Then, a unified modeling method for information expression and function realization of general tools and equipment is presented, and the handling methods of manual, semi-automatic, and automatic tools and equipment are discussed. Finally, the application in assembly simulation of the rear suspension and front suspension of a Roewe 750 automobile is given. The result shows that the modeling and handling methods are applicable in the interactive simulation of various tools and equipment, and can also be used to support assembly process planning in a virtual environment.
Thermal lattice BGK models for fluid dynamics
NASA Astrophysics Data System (ADS)
Huang, Jian
1998-11-01
As an alternative in modeling fluid dynamics, the Lattice Boltzmann method has attracted considerable attention. In this thesis, we present a general form of thermal Lattice BGK. This form can handle large differences in density and temperature, as well as high Mach numbers. This generalized method can easily model gases with different adiabatic index values. The numerical transport coefficients of this model are estimated both theoretically and numerically. Their dependence on the sizes of integration steps in time and space, and on the flow velocity and temperature, is studied and compared with other established CFD methods. This study shows that the numerical viscosity of the Lattice Boltzmann method depends linearly on the space interval, and on the flow velocity as well for supersonic flow. This indicates the method's limitation in modeling high Reynolds number compressible thermal flow. On the other hand, the Lattice Boltzmann method shows promise in modeling micro-flows, i.e., gas flows in micron-sized devices. A two-dimensional code has been developed based on the conventional thermal lattice BGK model, with some modifications and extensions for micro-flows and wall-fluid interactions. Pressure-driven micro-channel flow has been simulated. Results are compared with experiments and simulations using other methods, such as a spectral element code using slip boundary conditions with the Navier-Stokes equations and a Direct Simulation Monte Carlo (DSMC) method.
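For orientation, the basic lattice BGK update that thermal models build on can be written compactly. The sketch below is a plain isothermal D2Q9 collision-plus-streaming step in Python; the thermal model of the thesis carries additional moments and a different equilibrium, which are not reproduced here:

import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def bgk_step(f, tau):
    # f: (9, Nx, Ny) populations; tau: relaxation time.
    rho = f.sum(axis=0)
    u = np.einsum("ia,ixy->axy", c, f) / rho
    cu = np.einsum("ia,axy->ixy", c, u)
    usq = (u ** 2).sum(axis=0)
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f += -(f - feq) / tau                        # BGK collision
    for i, (cx, cy) in enumerate(c):             # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f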
Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till
2018-02-01
(1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
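The core of the argument can be reproduced with a stylized Monte Carlo in a few lines of Python (our toy version; the paper's derivation is more general, and the parameter values below are arbitrary): selecting the next exiting patient is length-biased, while selecting the next entering patient is not:

import numpy as np

rng = np.random.default_rng(0)
consult = rng.exponential(10.0, size=100_000)   # consultation lengths (minutes)

# "Next exiting patient": long consultations are more likely to straddle
# the moment the interviewer becomes free, so sampling is length-biased.
exit_sample = rng.choice(consult, size=20_000, p=consult / consult.sum())

# "Next entering patient": selection happens before the consultation
# length is realized, so it behaves like a simple random draw.
enter_sample = rng.choice(consult, size=20_000)

print(f"true mean: {consult.mean():.2f}")
print(f"next-exiting sample mean: {exit_sample.mean():.2f}  (biased high)")
print(f"next-entering sample mean: {enter_sample.mean():.2f} (unbiased)")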
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.
Cluster Correspondence Analysis.
van de Velden, M; D'Enza, A Iodice; Palumbo, F
2017-03-01
A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between-variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided, and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.
Convergence studies in meshfree peridynamic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seleson, Pablo; Littlewood, David J.
2016-04-15
Meshfree methods are commonly applied to discretize peridynamic models, particularly in numerical simulations of engineering problems. Such methods discretize peridynamic bodies using a set of nodes with characteristic volume, leading to particle-based descriptions of systems. In this article, we perform convergence studies of static peridynamic problems. We show that commonly used meshfree methods in peridynamics suffer from accuracy and convergence issues, due to a rough approximation of the contribution to the internal force density of nodes near the boundary of the neighborhood of a given node. We propose two methods to improve meshfree peridynamic simulations. The first method uses accurate computations of volumes of intersections between neighbor cells and the neighborhood of a given node, referred to as partial volumes. The second method employs smooth influence functions with a finite support within peridynamic kernels. Numerical results demonstrate great improvements in accuracy and convergence of peridynamic numerical solutions when using the proposed methods.
A beam hardening and dispersion correction for x-ray dark-field radiography.
Pelzer, Georg; Anton, Gisela; Horn, Florian; Rieger, Jens; Ritter, André; Wandner, Johannes; Weber, Thomas; Michel, Thilo
2016-06-01
X-ray dark-field imaging promises information on the small-angle scattering properties even of large samples. However, the dark-field image is correlated with the object's attenuation and phase shift if a polychromatic x-ray spectrum is used. A method to remove part of these correlations is proposed. The experimental setup for image acquisition was modeled in a wave-field simulation to quantify the dark-field signals originating solely from a material's attenuation and phase shift. A calibration matrix was simulated for ICRU46 breast tissue. Using the simulated data, a dark-field image of a human mastectomy sample was corrected for the fingerprint of the attenuation and phase images. Comparing the simulated, attenuation-based dark-field values to a phantom measurement, good agreement was found. Applying the proposed method to mammographic dark-field data, a reduction of the dark-field background and anatomical noise was achieved. The contrast between microcalcifications and their surrounding background was increased. The authors show that the influence of beam hardening and dispersion can be quantified by simulation and, thus, measured image data can be corrected. The simulation allows determination of the corresponding dark-field artifacts for a wide range of setup parameters, such as tube voltage and filtration. The application of the proposed method to mammographic dark-field data shows an increase in contrast compared to the original image, which might simplify further image-based diagnosis.
The convolutional differentiator method for numerical modelling of acoustic and elastic wavefields
NASA Astrophysics Data System (ADS)
Zhang, Zhong-Jie; Teng, Ji-Wen; Yang, Ding-Hui
1996-02-01
Based on the techniques of forward and inverse Fourier transformation, the authors discuss the design of an ordinary convolutional differentiator and apply it to the simulation of acoustic and elastic wavefields in isotropic media. To effectively suppress Gibbs effects caused by truncation, a Hanning window is introduced. Model computations show that the convolutional differentiator method has the advantages of speed, low memory requirements, and high precision, making it a promising method for numerical simulation.
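As a rough illustration of the idea (not the authors' operator design), a spectral differentiator with a Hanning taper can be written in a few lines of Python:

import numpy as np

def hanning_derivative(u, dx):
    # Differentiate in the Fourier domain (multiply by ik), with a Hann
    # taper over wavenumber to damp Gibbs oscillations from truncation.
    n = u.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=dx)
    xi = np.abs(np.fft.fftfreq(n)) / 0.5          # normalized |k|, 0..1
    taper = 0.5 * (1 + np.cos(np.pi * xi))        # Hann window in k-space
    return np.fft.ifft(np.fft.fft(u) * k * taper).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
# For a smooth, well-resolved signal the result stays close to cos(x).
print(np.max(np.abs(hanning_derivative(np.sin(x), x[1] - x[0]) - np.cos(x))))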
Analysis of drift correction in different simulated weighing schemes
NASA Astrophysics Data System (ADS)
Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.
2015-10-01
In the calibration of high-accuracy mass standards, some weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from the ABABAB and ABBA weighing series was carried out. The results show better efficacy of the ABABAB method for drift with smooth variation and small randomness.
A data-driven dynamics simulation framework for railway vehicles
NASA Astrophysics Data System (ADS)
Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun
2018-03-01
The finite element (FE) method is essential for simulating vehicle dynamics with fine details, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in large deformations undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To retain the advantages of both methods, this paper proposes a data-driven framework for the dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be represented by one or more surrogate elements replacing the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded in an MB model. The framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation results show that using a Legendre polynomial regression model to build surrogate elements can largely cut down the simulation time without sacrificing accuracy.
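A toy Python example of the surrogate-element step, under the assumption that the surrogate maps a normalized input (e.g., a deflection) to a force: fit a Legendre polynomial regression to mock FE training data once, then evaluate it cheaply inside the MB co-simulation loop. The data and polynomial degree below are invented for illustration:

import numpy as np
from numpy.polynomial import legendre

x_train = np.linspace(-1.0, 1.0, 200)            # normalized deflection samples
f_train = (1e5 * x_train + 4e4 * x_train**3      # mock FE force response
           + np.random.default_rng(1).normal(0, 500, x_train.size))

coef = legendre.legfit(x_train, f_train, deg=5)  # train the surrogate once

def surrogate_force(x):
    # Cheap evaluation replacing a full FE solve inside the MB time loop.
    return legendre.legval(x, coef)

print(surrogate_force(0.3))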
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2017-01-01
Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.
A simple mass-conserved level set method for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for the simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method is capable of accurately capturing the interface and keeping the mass conserved. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and the Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
NASA Astrophysics Data System (ADS)
Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei
2018-03-01
The forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results, based on more effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific problem deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area, because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
NASA Astrophysics Data System (ADS)
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Flood is one of the most hazardous disasters and causes serious damage to people and property around the world. To prevent and mitigate flood damage through early warning systems and/or river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to achieve flood flow simulation with complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet-dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet-dry interface, and the computation time is reduced by skipping the processing of non-flooded areas. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes two to ten times faster while producing the same results as the simulation without the ADU method.
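Our reading of the updating algorithm can be sketched in Python as follows (a schematic illustration, not the authors' code; the mask names and wet threshold are assumptions):

import numpy as np

def adu_update(depth, active, threshold=1e-4):
    # depth: (H, W) water depths; active: boolean mask of registered cells.
    H, W = depth.shape
    wet = active & (depth > threshold)
    # Only cells on the wet frontier need checking at each time step.
    for y, x in zip(*np.nonzero(wet)):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W:
                active[ny, nx] = True    # register neighbouring cells
    return active  # the flow solver then loops over active cells only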
Helsel, Dennis R.; Gilliom, Robert J.
1986-01-01
Estimates of distributional parameters (mean, standard deviation, median, interquartile range) are often desired for data sets containing censored observations. Eight methods for estimating these parameters have been evaluated by R. J. Gilliom and D. R. Helsel (this issue) using Monte Carlo simulations. To verify those findings, the same methods are now applied to actual water quality data. The best method (lowest root-mean-squared error (rmse)) over all parameters, sample sizes, and censoring levels is log probability regression (LR), the method found best in the Monte Carlo simulations. Best methods for estimating moment or percentile parameters separately are also identical to the simulations. Reliability of these estimates can be expressed as confidence intervals using rmse and bias values taken from the simulation results. Finally, a new simulation study shows that best methods for estimating uncensored sample statistics from censored data sets are identical to those for estimating population parameters. Thus this study and the companion study by Gilliom and Helsel form the basis for making the best possible estimates of either population parameters or sample statistics from censored water quality data, and for assessments of their reliability.
NASA Technical Reports Server (NTRS)
Slater, John W.; Saunders, John D.
2010-01-01
Methods of computational fluid dynamics were applied to simulate the aerodynamics within the turbine flowpath of a turbine-based combined-cycle propulsion system during inlet mode transition at Mach 4. Inlet mode transition involved the rotation of a splitter cowl to close the turbine flowpath to allow the full operation of a parallel dual-mode ramjet/scramjet flowpath. Steady-state simulations were performed at splitter cowl positions of 0 deg, -2 deg, -4 deg, and -5.7 deg, at which the turbine flowpath was closed half way. The simulations satisfied one objective of providing a greater understanding of the flow during inlet mode transition. Comparisons of the simulation results with wind-tunnel test data addressed another objective of assessing the applicability of the simulation methods for simulating inlet mode transition. The simulations showed that inlet mode transition could occur in a stable manner and that accurate modeling of the interactions among the shock waves, boundary layers, and porous bleed regions was critical for evaluating the inlet static and total pressures, bleed flow rates, and bleed plenum pressures. The simulations compared well with some of the wind-tunnel data, but uncertainties in both the wind-tunnel data and simulations prevented a formal evaluation of the accuracy of the simulation methods.
Kim, Eunsook
2018-02-01
Simulation education is a learning method for improving self-efficacy and critical thinking skills. However, not much study has been done on how to use it for education on emergency cardiac arrest situations, for which a multidisciplinary team approach is required. This study investigated the effects of simulation education on nursing students' self-efficacy and critical thinking skills in emergency cardiac arrest situations. A quasi-experimental research approach with a crossover design was used to compare two types of simulation instruction methods. This study was conducted with 76 nursing students divided into two groups by order of instruction methods, in November and December 2016. Both groups of participants experienced a simulation lesson based on the same emergency scenario. Group A first completed a roleplay of an emergency cardiac arrest situation in a clinical setting, while Group B first listened to a lecture on the procedure. After ten days, Group A repeated the simulation exercise after listening to the lecture, while Group B completed the simulation exercise after the roleplay. The students' self-efficacy and critical thinking skills were measured using a questionnaire before and after each session. In the first session, self-efficacy and critical thinking skills scores increased greatly from pretest to posttest for Group A in comparison to Group B; no statistically significant difference was found between the two groups. In the second session, Group B showed a significant increase between pretest and posttest, while Group A showed no significant difference. Conducting the simulation exercise after the roleplay was a more effective teaching method than conducting it after the lecture. Moreover, having the nursing students assume various roles in realistic roleplay situations combined with simulation exercises led to a deeper understanding of clinical situations and improved their self-efficacy and critical thinking skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Huang, C.; Hsu, N.
2013-12-01
This study incorporates Low-Impact Development (LID) rainwater catchment technology into the Storm Water Management Model (SWMM) to design the spatial capacity and quantity of rain barrels for urban flood mitigation, and proposes a simulation-optimization model for effectively searching for the optimal design. In the simulation method, we design a series of regular spatial distributions of the capacity and quantity of rainwater catchment facilities, so that the reduced flooding under a variety of design forms can be simulated by SWMM. We then calculate the net benefit, which equals the reduction in inundation loss minus the facility cost, and the best solution of the simulation method serves as the initial solution for the optimization model. In the optimization method, we first apply the outcomes of the simulation method and a Back-Propagation Neural Network (BPNN) to develop a water level simulation model of the urban drainage system, in order to replace SWMM, which operates through a graphical user interface and is hard to couple with an optimization model and method. We then embed the BPNN-based simulation model into the developed optimization model, whose objective function is to minimize the negative net benefit. Finally, we establish a tabu search-based algorithm to optimize the planning solution. This study applies the developed method in Zhonghe Dist., Taiwan. Results showed that applying tabu search and the BPNN-based simulation model within the optimization model not only finds solutions 12.75% better than the simulation method, but also resolves the limitations of previous studies. Furthermore, the optimized spatial rain barrel design can reduce inundation loss by 72% according to historical flood events.
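For concreteness, a generic tabu-search skeleton of the kind described is sketched below in Python. The cost function here is a stand-in for the paper's BPNN-based net-benefit model, and the move and tabu-list rules are assumptions:

import random

def tabu_search(init_design, cost, neighbours, n_iter=500, tenure=20):
    current = best = init_design
    tabu, best_cost = [], cost(init_design)
    for _ in range(n_iter):
        candidates = [d for d in neighbours(current) if d not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)   # best admissible move
        tabu.append(current)
        if len(tabu) > tenure:                # fixed-length tabu list
            tabu.pop(0)
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy usage: design = barrel counts per sub-catchment, cost = negative net benefit.
design0 = (2, 2, 2)
cost = lambda d: -(3 * sum(d) - sum(x**2 for x in d))   # invented stand-in
neigh = lambda d: [tuple(max(0, x + random.choice((-1, 1))) if i == j else x
                         for j, x in enumerate(d)) for i in range(len(d))]
print(tabu_search(design0, cost, neigh))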
NASA Astrophysics Data System (ADS)
Moise Famien, Adjoua; Defrance, Dimitri; Sultan, Benjamin; Janicot, Serge; Vrac, Mathieu
2017-04-01
Different CMIP exercises show that simulations of current and future temperature and precipitation are complex, with a high degree of uncertainty. For example, the African monsoon system is not correctly simulated and most of the CMIP5 models underestimate the precipitation. Global Climate Models (GCMs) therefore show significant systematic biases that require bias correction before their output can be used in impact studies. Several bias correction methods have been developed over the years, increasingly relying on more complex statistical techniques. The aim of this work is to show the value of the CDFt (Cumulative Distribution Function transform; Michelangeli et al., 2009) method for reducing the bias of 29 CMIP5 GCMs over Africa and to assess the impact of bias-corrected data on crop yield prediction by the end of the 21st century. We apply the CDFt to daily data covering the period from 1950 to 2099 (Historical and RCP8.5) and correct the climate variables (temperature, precipitation, solar radiation, wind) using the new daily database from the EU project WATer and global CHange (WATCH), available from 1979 to 2013, as reference data. The performance of the method is assessed in several cases. First, data are corrected based on different calibration periods and are compared, on the one hand, with observations to estimate the sensitivity of the method to the calibration period and, on the other hand, with another bias-correction method used in the ISIMIP project. We find that, whatever the calibration period used, CDFt corrects the mean state of the variables well and preserves their trends, as well as daily rainfall occurrence and intensity distributions. However, some differences appear when compared with the outputs obtained with the method used in ISIMIP, showing that the quality of the correction is strongly related to the reference data. Second, we validate the bias correction method with agronomic simulations (SARRA-H model; Kouressy et al., 2008) by comparison with FAO crop yield estimates over West Africa. Impact simulations show that the crop model is sensitive to input data and indicate decreasing crop yields by the end of this century. Michelangeli, P. A., Vrac, M., & Loukos, H. (2009). Probabilistic downscaling approaches: Application to wind cumulative distribution functions. Geophysical Research Letters, 36(11). Kouressy, M., Dingkuhn, M., Vaksmann, M., & Heinemann, A. B. (2008). Adaptation to diverse semi-arid environments of sorghum genotypes having different plant type and sensitivity to photoperiod. Agricultural and Forest Meteorology, http://dx.doi.org/10.1016/j.agrformet.2007.09.009
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
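One plausible reading of the simulation setup, sketched in Python: generate paired-comparison outcomes with binomially distributed choice errors, recover Thurstone Case V scale values, and measure the average standard deviation of the scaled values (all parameter values below are arbitrary):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true_scale = np.array([0.0, 0.4, 0.9, 1.5, 2.2])    # invented stimulus scale
n_stim, n_obs = true_scale.size, 50                  # sampling size per pair

def scale_once():
    p_emp = np.zeros((n_stim, n_stim))
    for i in range(n_stim):
        for j in range(n_stim):
            if i != j:
                p = norm.cdf(true_scale[i] - true_scale[j])
                wins = rng.binomial(n_obs, p)        # binomial choice errors
                p_emp[i, j] = np.clip(wins / n_obs, 0.01, 0.99)
    z = norm.ppf(p_emp)
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                            # Case V scale estimate

est = np.array([scale_once() for _ in range(200)])
print("average std of scaled values:", est.std(axis=0).mean())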
Simulation of two-phase flow in horizontal fracture networks with numerical manifold method
NASA Astrophysics Data System (ADS)
Ma, G. W.; Wang, H. D.; Fan, L. F.; Wang, B.
2017-10-01
This paper presents the simulation of two-phase flow in discrete fracture networks with the numerical manifold method (NMM). In the present method, each fluid phase is considered to be confined within assumed discrete interfaces. The homogeneous model is modified to represent the mixed fluids. A new mathematical cover formation for fracture intersections is proposed to satisfy mass conservation. NMM simulations of two-phase flow in a single fracture, an intersection, and a fracture network are illustrated graphically and validated against the analytical method or the finite element method. Results show that the motion of the discrete interface depends significantly on the mobility ratio of the two fluids rather than on the value of the mobility itself. The variation of fluid velocity in each fracture segment and the driven fluid content are also influenced by the mobility ratio. The advantages of NMM in the simulation of two-phase flow in a fracture network are demonstrated in the present study, and the method can be further developed for practical engineering applications.
Gaussian theory for spatially distributed self-propelled particles
NASA Astrophysics Data System (ADS)
Seyed-Allaei, Hamid; Schimansky-Geier, Lutz; Ejtehadi, Mohammad Reza
2016-12-01
Obtaining a reduced description with particle and momentum flux densities starting from the microscopic equations of motion of the particles requires approximations. The usual method, which we refer to as the truncation method, is to set to zero all Fourier modes of the orientation distribution above a given order. Here we propose another method to derive continuum equations for interacting self-propelled particles. The derivation is based on a Gaussian approximation (GA) of the distribution of the direction of particles. First, by means of simulation of the microscopic model, we justify that the distribution of individual directions is well fitted by a wrapped Gaussian distribution. Second, we numerically integrate the continuum equations derived in the GA in order to compare with results of particle simulations. We find that the global polarization in the GA exhibits a hysteresis in dependence on the noise intensity, showing qualitatively the same behavior as the particle simulations. Moreover, both global polarizations agree perfectly for low noise intensities. The spatiotemporal structures of the GA are also in agreement with simulations. We conclude that the GA shows qualitative agreement over a wide range of noise intensities. In particular, for low noise intensities the agreement with simulations is better than for other approximations, making the GA an acceptable candidate for describing spatially distributed self-propelled particles.
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1992-01-01
For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
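The core construction, reduced to one dimension, can be sketched in Python (an illustration under simplifying assumptions; the paper's procedure is three-dimensional and handles the velocity components jointly): random Fourier phases with amplitudes set by the prescribed power spectrum:

import numpy as np

def synthetic_signal(n, dx, spectrum, rng):
    # Amplitudes follow sqrt(E(k)); phases are uniform random.
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    amp = np.sqrt(spectrum(k))
    phase = rng.uniform(0, 2 * np.pi, k.size)
    coeffs = amp * np.exp(1j * phase)
    coeffs[0] = 0.0                       # enforce a zero-mean fluctuation
    return np.fft.irfft(coeffs, n=n)

rng = np.random.default_rng(3)
E = lambda k: np.where(k > 0, k**4 * np.exp(-2 * (k / 5.0)**2), 0.0)  # model spectrum
u = synthetic_signal(4096, 0.01, E, rng)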
Numerical integration of detector response functions via Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
Numerical integration of detector response functions via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
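A toy Python version of the speed-up described above (our illustration, with invented detector physics): tally Monte Carlo events once into a response matrix, after which any source spectrum can be folded with a single matrix-vector product instead of a new Monte Carlo run:

import numpy as np

rng = np.random.default_rng(7)
e_true = rng.uniform(0.0, 10.0, 2_000_000)             # sampled true energies
e_det = e_true * rng.normal(0.9, 0.05, e_true.size)    # mock detector response
bins = np.linspace(0.0, 10.0, 101)

R, _, _ = np.histogram2d(e_det, e_true, bins=(bins, bins))
R /= np.maximum(R.sum(axis=0), 1.0)    # column-normalize: P(detected | true)

# Fold an arbitrary source spectrum through the stored response matrix.
centers = bins[:-1] + 0.05
source = np.exp(-0.5 * ((centers - 2.0) / 0.3) ** 2)
predicted = R @ source                  # near-instant vs. a full MC rerun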
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge into neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks with a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
A Study of Impact Point Detecting Method Based on Seismic Signal
NASA Astrophysics Data System (ADS)
Huo, Pengju; Zhang, Yu; Xu, Lina; Huang, Yong
The projectile landing position has to be determined for projectile recovery and range measurement in targeting tests. In this paper, a global search method based on the velocity variance is proposed. To verify the applicability of this method, a simulation analysis over an area of four million square meters was conducted with the same array structure as the commonly used linear positioning method, and MATLAB was used to compare and analyze the two methods. The simulation results show that the global search method based on the velocity variance has high positioning accuracy and stability, and can meet the needs of impact point location.
Yang, Jin; Hlavacek, William S.
2011-01-01
Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie’s method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e., long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e., time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that ligand-induced aggregation of receptors yields a large connected receptor cluster, the rejection-free method is more efficient. PMID:21832806
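The contrast between the two samplers can be sketched abstractly in Python (our illustration, not RuleMonkey or NFsim code; rule names and rate functions are placeholders): both advance time with Gillespie-type exponential waiting times, but the rejection variant works from an upper bound on each rule rate and may produce null events:

import random

def rejection_step(t, rate_bound, true_rate):
    # rate_bound: dict rule -> upper bound; true_rate: rule -> exact rate.
    total = sum(rate_bound.values())
    t += random.expovariate(total)
    rule = random.choices(list(rate_bound), weights=list(rate_bound.values()))[0]
    if random.random() < true_rate(rule) / rate_bound[rule]:
        return t, rule      # accepted firing
    return t, None          # null event: time advances, state is unchanged

def exact_step(t, rates):
    # Rejection-free variant: rates are tracked exactly, so every draw fires.
    total = sum(rates.values())
    t += random.expovariate(total)
    return t, random.choices(list(rates), weights=list(rates.values()))[0]

The trade-off mirrored here is the one the paper measures: exact tracking pays an update cost per event, while rejection pays through wasted null events.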
Ray tracing the Wigner distribution function for optical simulations
NASA Astrophysics Data System (ADS)
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.
Modeling and simulation of ocean wave propagation using lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Nuraiman, Dian
2017-10-01
In this paper, we present the modeling and simulation of ocean wave propagation from the deep sea to the shoreline, which requires high computational cost for simulations over a large domain. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of the free surface close to the shoreline. The results show that the coupled model can reduce the computational cost significantly compared to the full NSE model.
Attitude algorithm and initial alignment method for SINS applied in short-range aircraft
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hui; He, Zhao-Cheng; You, Feng; Chen, Bo
2017-07-01
This paper presents an attitude solution algorithm based on Micro-Electro-Mechanical System (MEMS) sensors and the quaternion method. We completed the numerical calculation and engineering implementation by adopting the fourth-order Runge-Kutta algorithm on a digital signal processor. The state-space mathematical model of initial alignment on a static base was established, and an initial alignment method based on the Kalman filter was proposed. Based on a hardware-in-the-loop simulation platform, a short-range flight simulation test and an actual flight test were carried out. The results show that the errors of the pitch, yaw, and roll angles converge quickly, and the fitting rate between the flight simulation and the flight test is more than 85%.
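A compact Python sketch of the quaternion attitude update with the fourth-order Runge-Kutta algorithm (the paper's DSP implementation, sensor handling, and alignment filter are not reproduced; the rate values below are arbitrary):

import numpy as np

def qdot(q, w):
    # Quaternion derivative for body rates w = (wx, wy, wz), q = [w, x, y, z].
    wx, wy, wz = w
    omega = np.array([[0, -wx, -wy, -wz],
                      [wx,  0,  wz, -wy],
                      [wy, -wz,  0,  wx],
                      [wz,  wy, -wx,  0]])
    return 0.5 * omega @ q

def rk4_attitude(q, w, dt):
    k1 = qdot(q, w)
    k2 = qdot(q + 0.5 * dt * k1, w)
    k3 = qdot(q + 0.5 * dt * k2, w)
    k4 = qdot(q + dt * k3, w)
    q = q + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return q / np.linalg.norm(q)      # renormalize to a unit quaternion

q = np.array([1.0, 0.0, 0.0, 0.0])
q = rk4_attitude(q, np.array([0.01, 0.02, -0.01]), 0.005)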
Curuksu, Jeremy; Zacharias, Martin
2009-03-14
Although molecular dynamics (MD) simulations have been applied frequently to study flexible molecules, the sampling of conformational states separated by barriers is limited due to currently possible simulation time scales. Replica-exchange (Rex)MD simulations that allow for exchanges between simulations performed at different temperatures (T-RexMD) can achieve improved conformational sampling. However, in the case of T-RexMD the computational demand grows rapidly with system size. A Hamiltonian RexMD method that specifically enhances coupled dihedral angle transitions has been developed. The method employs added biasing potentials as replica parameters that destabilize available dihedral substates and was applied to study coupled dihedral transitions in nucleic acid molecules. The biasing potentials can be either fixed at the beginning of the simulation or optimized during an equilibration phase. The method was extensively tested and compared to conventional MD simulations and T-RexMD simulations on an adenine dinucleotide system and on a DNA abasic site. The biasing potential RexMD method showed improved sampling of conformational substates compared to conventional MD simulations similar to T-RexMD simulations but at a fraction of the computational demand. It is well suited to study systematically the fine structure and dynamics of large nucleic acids under realistic conditions including explicit solvent and ions and can be easily extended to other types of molecules.
Simulation of Rutherford backscattering spectrometry from arbitrary atom structures
Zhang, S.; Univ. of Helsinki; Nordlund, Kai; ...
2016-10-25
Rutherford backscattering spectrometry in a channeling direction (RBS/C) is a powerful tool for analysis of the fraction of atoms displaced from their lattice positions. However, it is in many cases not straightforward to analyze what actual defect structure underlies the RBS/C signal. To reveal insights into RBS/C signals from arbitrarily complex defective atomic structures, we develop in this paper a method for simulating the RBS/C spectrum from a set of arbitrary read-in atom coordinates (obtained, e.g., from molecular dynamics simulations). We apply the developed method to simulate the RBS/C signals from Ni crystal structures containing randomly displaced atoms, Frenkel point defects, and extended defects, respectively. The RBS/C simulations show that, even for the same number of atoms in defects, the RBS/C signal is much stronger for the extended defects. Finally, comparison with experimental results shows that the disorder profile obtained from RBS/C signals in ion-irradiated Ni is due to a small fraction of extended defects rather than a large number of individual random atoms.
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study, the commissioning of a dose calculation algorithm in a currently used treatment planning system (TPS) was performed, and the calculation accuracy of two methods available in the TPS, collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR), was verified in tissue heterogeneities. For this purpose, an inhomogeneous phantom (IMRT thorax phantom) was used, and dose curves obtained by the TPS were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed using EDR2 radiographic films within the phantom, and the dose difference (DD) between the experimental results and the two calculation methods was obtained. The results indicate a maximum difference between the two methods of 12% in the lung and 3% in the bone tissue of the phantom, with the CCC algorithm giving more accurate depth dose curves in tissue heterogeneities. The simulation results show accurate dose estimation by MCNP4C in the soft tissue region of the phantom, as well as better agreement than the ETAR method in bone and lung tissues. PMID:22973081
NASA Astrophysics Data System (ADS)
Saito, Asaki; Yasutomi, Shin-ichi; Tamura, Jun-ichi; Ito, Shunji
2015-06-01
We introduce a true orbit generation method enabling exact simulations of dynamical systems defined by arbitrary-dimensional piecewise linear fractional maps, including piecewise linear maps, with rational coefficients. This method can generate sufficiently long true orbits which reproduce typical behaviors (inherent behaviors) of these systems, by properly selecting algebraic numbers in accordance with the dimension of the target system, and involving only integer arithmetic. By applying our method to three dynamical systems—that is, the baker's transformation, the map associated with a modified Jacobi-Perron algorithm, and an open flow system—we demonstrate that it can reproduce their typical behaviors that have been very difficult to reproduce with conventional simulation methods. In particular, for the first two maps, we show that we can generate true orbits displaying the same statistical properties as typical orbits, by estimating the marginal densities of their invariant measures. For the open flow system, we show that an obtained true orbit correctly converges to the stable period-1 orbit, which is inherently possessed by the system.
Kann, Z R; Skinner, J L
2014-09-14
Non-polarizable models for ions and water quantitatively and qualitatively misrepresent the salt concentration dependence of water diffusion in electrolyte solutions. In particular, experiment shows that the water diffusion coefficient increases in the presence of salts of low charge density (e.g., CsI), whereas the results of simulations with non-polarizable models show a decrease of the water diffusion coefficient in all alkali halide solutions. We present a simple charge-scaling method based on the ratio of the solvent dielectric constants from simulation and experiment. Using an ion model that was developed independently of a solvent, i.e., in the crystalline solid, this method improves the water diffusion trends across a range of water models. When used with a good-quality water model, e.g., TIP4P/2005 or E3B, this method recovers the qualitative behaviour of the water diffusion trends. The model and method used were also shown to give good results for other structural and dynamic properties including solution density, radial distribution functions, and ion diffusion coefficients.
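The scaling rule itself is compact enough to sketch: if eps_sim is the dielectric constant of the water model and eps_exp the experimental value, ionic charges are rescaled so that ion-ion Coulomb interactions are consistent with the model's dielectric screening. The functional form below is our assumed reading of the ratio described in the abstract, and the specific dielectric constants are illustrative values, not data from the paper.

```python
import math

def scaled_charge(q, eps_sim, eps_exp):
    # Rescale a formal ionic charge so that ion-ion interactions screened by
    # the model dielectric constant eps_sim mimic interactions screened by
    # the experimental value eps_exp (assumed form: q * sqrt(eps_sim/eps_exp)).
    return q * math.sqrt(eps_sim / eps_exp)

# Illustrative numbers: TIP4P/2005 water has eps_sim near 58; experiment is ~78.
print(scaled_charge(1.0, 58.0, 78.4))  # ~0.86 e for a monovalent cation
```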
A fast exact simulation method for a class of Markov jump processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yao, E-mail: yaoli@math.umass.edu; Hu, Lili, E-mail: lilyhu86@gmail.com
2015-11-14
A new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulations of a class of Markov jump processes is presented in this paper. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly apply a hash-table-like bucket sort algorithm to all times of occurrence covered by a time step of length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show clear advantages of the HLM for large-scale problems.
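The bucket-sort idea at the core of the HLM can be sketched as follows: tentative firing times falling inside the current leap window of length tau are hashed into equal-width buckets and then processed bucket by bucket. This is a schematic reading of the abstract, not the authors' implementation; the clock rates are placeholders.

```python
import random
from collections import defaultdict

def bucket_sort_events(firing_times, t0, tau, n_buckets):
    # Hash each tentative firing time inside the window [t0, t0 + tau)
    # into one of n_buckets equal-width buckets, indexed by floor division.
    width = tau / n_buckets
    buckets = defaultdict(list)
    for clock_id, t in firing_times.items():
        if t0 <= t < t0 + tau:
            buckets[int((t - t0) / width)].append((t, clock_id))
    return buckets

# Placeholder exponential clocks: firing time drawn as Exp(rate).
rates = {0: 1.0, 1: 0.5, 2: 2.0}
times = {i: random.expovariate(r) for i, r in rates.items()}
buckets = bucket_sort_events(times, t0=0.0, tau=1.0, n_buckets=4)
for b in sorted(buckets):
    for t, cid in sorted(buckets[b]):
        print(f"bucket {b}: clock {cid} fires at t = {t:.3f}")
```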
NASA Astrophysics Data System (ADS)
Huang, W. D.; Fan, H. G.; Chen, N. X.
2012-11-01
To study the interaction between the transient flow in the pipe and the unsteady turbulent flow in the turbine, a coupled model of the transient pipe flow and the three-dimensional unsteady flow in the turbine is developed based on the method of characteristics and the fluid governing equations in the accelerated rotational relative coordinate frame. The load-rejection process during closure of the guide vanes of a hydraulic power plant is simulated by the coupled method, the traditional transient simulation method, and the traditional three-dimensional unsteady flow calculation method, and the results are compared. The pressure, unit flux and rotation speed calculated by the three methods show a similar trend. However, because the coupled method accounts for the elastic water hammer in the pipe and the pressure fluctuation in the turbine, it predicts a higher pressure rise at the spiral case inlet and a stronger pressure fluctuation in the turbine.
Simulation study on the trembling shear behavior of electrorheological fluid.
Yang, F; Gong, X L; Xuan, S H; Jiang, W Q; Jiang, C X; Zhang, Z
2011-07-01
The trembling shear behavior of electrorheological (ER) fluids has been investigated by using a computer simulation method, and a shear-slide boundary model is proposed to understand this phenomenon. A thiourea-doped Ba-Ti-O ER fluid which shows a trembling shear behavior was first prepared and then systematically studied by both theoretical and experimental methods. The shear curves of ER fluids in the dynamic state were simulated with shear rates from 0.1 to 1000 s^-1 under different electric fields. The simulation results of the flow curves match the experimental results very well. The trembling shear curves are divided into four regions and each region can be explained by the proposed model.
Three-dimensional microstructure simulation of Ni-based superalloy investment castings
NASA Astrophysics Data System (ADS)
Pan, Dong; Xu, Qingyan; Liu, Baicheng
2011-05-01
An integrated macro and micro multi-scale model for the three-dimensional microstructure simulation of Ni-based superalloy investment castings was developed, and applied to industrial castings to investigate grain evolution during solidification. A ray tracing method was used to deal with the complex heat radiation transfer. The microstructure evolution was simulated based on the Modified Cellular Automaton method, which was coupled with three-dimensional nested macro and micro grids. Experiments on a Ni-based superalloy turbine wheel investment casting were carried out, and the measured grain structures showed good correspondence with the simulated results. This indicates that the proposed model can predict the microstructure of the casting accurately, providing a tool for process optimization.
Research on LQR optimal control method of active engine mount
NASA Astrophysics Data System (ADS)
Huan, Xie; Yu, Duan
2018-04-01
In this paper, the LQR control method is applied to the active engine mount, and a six-cylinder engine excitation model is established. Through co-simulation of AMESim and MATLAB, the vibration isolation performance of the active and passive mount systems is analyzed. Under excitation from multiple engine operating conditions, the simulated vertical displacement, acceleration and dynamic deflection of the vehicle body show that the vibration isolation capability of the active mount system is superior to that of the passive mount system. Compared with the passive mount, the LQR active mount greatly improves vibration isolation performance, demonstrating the feasibility and effectiveness of the LQR control method.
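As a hedged illustration of the LQR design step, the sketch below computes the state-feedback gain K = R^-1 B^T P from the continuous-time algebraic Riccati equation for a toy one-degree-of-freedom mount model; the system matrices and weights are placeholders, not the paper's engine model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 1-DOF mount: states x = [displacement, velocity], input u = actuator force.
m, c, k = 50.0, 200.0, 1.0e5          # placeholder mass, damping, stiffness
A = np.array([[0.0, 1.0], [-k/m, -c/m]])
B = np.array([[0.0], [1.0/m]])

Q = np.diag([1.0e6, 1.0])             # penalize displacement strongly
R = np.array([[1.0e-3]])              # cheap control effort

P = solve_continuous_are(A, B, Q, R)  # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)       # optimal gain: u = -K x
print("LQR gain K =", K)
```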
NASA Astrophysics Data System (ADS)
Zhao, Y.; Wang, B.; Wang, Y.
2007-12-01
Recently, a new data assimilation method called “3-dimensional variational data assimilation of mapped observation (3DVM)” has been developed by the authors. We have shown that the new method is very efficient and inexpensive compared with its counterpart, 4-dimensional variational data assimilation (4DVar). The new method has been implemented in the Penn State/NCAR mesoscale model MM5V1 (MM5_3DVM). In this study, we apply the new method to the bogus data assimilation (BDA) available in the original MM5 with 4DVar. In the new approach, a specified sea-level pressure (SLP) field (bogus data) is incorporated into MM5 through the 3DVM (for convenience, we call it variational bogus mapped data assimilation, BMDA) instead of the original 4DVar data assimilation. To demonstrate the effectiveness of the new 3DVM method, the initialization and simulation of a landfalling typhoon, typhoon Dan (1999) over the western North Pacific, with the new method are compared with those using its counterpart 4DVar in MM5. Results show that the initial structure and the simulated intensity and track are improved more significantly using 3DVM than 4DVar. Sensitivity experiments also show that the simulated typhoon track and intensity are more sensitive to the size of the assimilation window in the 4DVar than in the 3DVM. Meanwhile, 3DVM requires much less computing cost than its counterpart 4DVar for a given time window.
Estimating non-circular motions in barred galaxies using numerical N-body simulations
NASA Astrophysics Data System (ADS)
Randriamampandry, T. H.; Combes, F.; Carignan, C.; Deg, N.
2015-12-01
The observed velocities of the gas in barred galaxies are a combination of the azimuthally averaged circular velocity and non-circular motions, primarily caused by gas streaming along the bar. These non-circular flows must be accounted for before the observed velocities can be used in mass modelling. In this work, we examine the performance of the tilted-ring method and the DISKFIT algorithm for transforming velocity maps of barred spiral galaxies into rotation curves (RCs) using simulated data. We find that the tilted-ring method, which does not account for streaming motions, under-/overestimates the circular motions when the bar is parallel/perpendicular to the projected major axis. DISKFIT, which does include streaming motions, is limited to orientations where the bar is not aligned with either the major or minor axis of the image. Therefore, we propose a method of correcting RCs based on numerical simulations of galaxies. We correct the RC derived from the tilted-ring method based on a numerical simulation of a galaxy with similar properties and projections as the observed galaxy. Using observations of NGC 3319, which has a bar aligned with the major axis, as a test case, we show that the inferred mass models from the uncorrected and corrected RCs are significantly different. These results show the importance of correcting for the non-circular motions and demonstrate that new methods of accounting for these motions are necessary as current methods fail for specific bar alignments.
Using Communication Technology to Enhance Interprofessional Education Simulations.
Shrader, Sarah; Kostoff, Matthew; Shin, Tiffany; Heble, Annie; Kempin, Brian; Miller, Astyn; Patykiewicz, Nick
2016-02-25
To determine the impact of simulations using an alternative method of communication on students' satisfaction, attitudes, confidence, and performance related to interprofessional communication. One hundred sixty-three pharmacy students participated in a required applications-based capstone course. Students were randomly assigned to one of three interprofessional education (IPE) simulations with other health professions students using communication methods such as telephone, e-mail, and video conferencing. Pharmacy students completed a validated survey instrument, Attitude Toward Healthcare Teams Scale (ATHCTS) prior to and after course participation. Significant positive changes occurred for 5 out of 20 items. Written reflection papers and student satisfaction surveys completed after participation showed positive themes and satisfaction. Course instructors evaluated student performance using rubrics for formative feedback. Implementation of IPE simulations using various methods of communication technology is an effective way for pharmacy schools to incorporate IPE into their curriculum.
Hybrid thermal link-wise artificial compressibility method
NASA Astrophysics Data System (ADS)
Obrecht, Christian; Kuznik, Frédéric
2015-10-01
Thermal flow prediction is a subject of interest from both scientific and engineering points of view. Our motivation is to develop an accurate, easy to implement and highly scalable method for the simulation of convective flows. To this end, we present an extension to the link-wise artificial compressibility method (LW-ACM) for thermal simulation of weakly compressible flows. The novel hybrid formulation uses second-order finite difference operators of the energy equation based on the same stencils as the LW-ACM. For validation purposes, the differentially heated cubic cavity was simulated. The simulations remained stable for Rayleigh numbers up to Ra = 10^8. The Nusselt numbers at isothermal walls and the dynamic quantities are in good agreement with reference values from the literature. Our results show that the hybrid thermal LW-ACM is an effective and easy-to-use solution for convective flows.
NASA Astrophysics Data System (ADS)
He, Liping; Lu, Gang; Chen, Dachuan; Li, Wenjun; Lu, Chunsheng
2017-07-01
This paper investigates the three-dimensional (3D) injection molding flow of short fiber-reinforced polymer composites using a smoothed particle hydrodynamics (SPH) simulation method. The polymer melt was modeled as a power law fluid and the fibers were considered as rigid cylindrical bodies. The filling details and fiber orientation in the injection-molding process were studied. The results indicated that the SPH method could effectively predict the order of filling, fiber accumulation, and heterogeneous distribution of fibers. The SPH simulation also showed that fibers were mainly aligned to the flow direction in the skin layer and inclined to the flow direction in the core layer. Additionally, the fiber-orientation state in the simulation was quantitatively analyzed and found to be consistent with the results calculated by conventional tensor methods.
Shiraishi, Emi; Maeda, Kazuhiro; Kurata, Hiroyuki
2009-02-01
Numerical simulation of differential equation systems plays a major role in understanding how metabolic network models generate particular cellular functions. On the other hand, although many elegant algorithms have been presented, classical technical problems with stiff differential equations remain to be solved. To relax the stiffness problem, we propose new practical methods: the gradual update of differential-algebraic equations, based on gradual application of the steady-state approximation to stiff differential equations, and the gradual update of the initial values in differential-algebraic equations. These empirical methods show high efficiency in simulating the steady-state solutions of stiff differential equations that existing solvers alone cannot solve. They are effective in extending the applicability of dynamic simulation to biochemical network models.
Fung, Lillia; Boet, Sylvain; Bould, M Dylan; Qosa, Haytham; Perrier, Laure; Tricco, Andrea; Tavares, Walter; Reeves, Scott
2015-01-01
Crisis resource management (CRM) abilities are important for different healthcare providers to effectively manage critical clinical events. This study aims to review the effectiveness of simulation-based CRM training for interprofessional and interdisciplinary teams compared to other instructional methods (e.g., didactics). Interprofessional teams are composed of several professions (e.g., nurse, physician, midwife) while interdisciplinary teams are composed of several disciplines from the same profession (e.g., cardiologist, anaesthesiologist, orthopaedist). Medline, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, and ERIC were searched using terms related to CRM, crisis management, crew resource management, teamwork, and simulation. Trials comparing simulation-based CRM team training versus any other method of education were included. The educational interventions involved interprofessional or interdisciplinary healthcare teams. The initial search identified 7456 publications; 12 studies were included. Simulation-based CRM team training was associated with significant improvements in CRM skill acquisition in all but two studies when compared to didactic case-based CRM training or simulation without CRM training. Of the 12 included studies, one showed significant improvements in team behaviours in the workplace, while two studies demonstrated sustained reductions in adverse patient outcomes after a single simulation-based CRM team intervention. In conclusion, simulation-based CRM training for interprofessional and interdisciplinary teams shows promise in teaching CRM in the simulator when compared to didactic case-based CRM education or simulation without CRM teaching. More research, however, is required to demonstrate transfer of learning to workplaces and potential impact on patient outcomes.
Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang
Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered gold-standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating an absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with quadrature set 8 and the first order of the Legendre polynomial expansions proved to be the most efficient computation method in the authors' study. The single-thread computation time of this deterministic simulation was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
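One simple instance of such a stopping rule is to monitor the relative half-width of the confidence interval of the running mean and stop once it falls below a tolerance. The sketch below is a generic illustration of this idea under stated assumptions (normal-approximation interval, a single scalar output), not the specific multi-criteria method proposed in the paper.

```python
import numpy as np

def mc_converged(samples, rel_tol=0.01, z=1.96):
    # Stop when the 95% confidence half-width of the running mean,
    # relative to the mean itself, drops below rel_tol.
    n = len(samples)
    if n < 30:
        return False
    mean = np.mean(samples)
    half_width = z * np.std(samples, ddof=1) / np.sqrt(n)
    return abs(mean) > 0 and half_width / abs(mean) < rel_tol

rng = np.random.default_rng(0)
samples = []
while not mc_converged(samples):
    samples.append(10.0 + rng.normal(0.0, 2.0))  # placeholder model output
print(f"converged after {len(samples)} simulations")
```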
Lattice Boltzmann simulations of flapping wings: The flock effect and the lateral wind effect
NASA Astrophysics Data System (ADS)
de Rosis, Alessandro
2014-02-01
In this paper, numerical analyses aimed at simulating biological organisms immersed in a fluid are carried out. The fluid domain is modeled through the lattice Boltzmann (LB) method, while the immersed boundary method is used to account for the position of the organisms, idealized as rigid bodies. The time discontinuous Galerkin method is employed to compute body motion. An explicit coupling strategy to combine the adopted numerical methods is proposed. The vertical take-off of a pair of butterflies is numerically simulated in different scenarios, showing the mutual interaction that each butterfly exerts on the other. Moreover, the effect of lateral wind is investigated. A critical threshold value of the lateral wind, above which take-off becomes increasingly arduous, is defined.
NASA Astrophysics Data System (ADS)
Greene, Patrick T.; Eldredge, Jeff D.; Zhong, Xiaolin; Kim, John
2016-07-01
In this paper, we present a method for performing uniformly high-order direct numerical simulations of high-speed flows over arbitrary geometries. The method was developed with the goal of simulating and studying the effects of complex isolated roughness elements on the stability of hypersonic boundary layers. The simulations are carried out on Cartesian grids with the geometries imposed by a third-order cut-stencil method. A fifth-order hybrid weighted essentially non-oscillatory scheme was implemented to capture any steep gradients in the flow created by the geometries and a third-order Runge-Kutta method is used for time advancement. A multi-zone refinement method was also utilized to provide extra resolution at locations with expected complex physics. The combination results in a globally fourth-order scheme in space and third order in time. Results confirming the method's high order of convergence are shown. Two-dimensional and three-dimensional test cases are presented and show good agreement with previous results. A simulation of Mach 3 flow over the logo of the Ubuntu Linux distribution is shown to demonstrate the method's capabilities for handling complex geometries. Results for Mach 6 wall-bounded flow over a three-dimensional cylindrical roughness element are also presented. The results demonstrate that the method is a promising tool for the study of hypersonic roughness-induced transition.
Mspire-Simulator: LC-MS shotgun proteomic simulator for creating realistic gold standard data.
Noyce, Andrew B; Smith, Rob; Dalgleish, James; Taylor, Ryan M; Erb, K C; Okuda, Nozomu; Prince, John T
2013-12-06
The most important step in any quantitative proteomic pipeline is feature detection (aka peak picking). However, generating quality hand-annotated data sets to validate the algorithms, especially for lower abundance peaks, is nearly impossible. An alternative for creating gold standard data is to simulate it with features closely mimicking real data. We present Mspire-Simulator, a free, open-source shotgun proteomic simulator that goes beyond previous simulation attempts by generating LC-MS features with realistic m/z and intensity variance along with other noise components. It also includes machine-learned models for retention time and peak intensity prediction and a genetic algorithm to custom fit model parameters for experimental data sets. We show that these methods are applicable to data from three different mass spectrometers, including two fundamentally different types, and show visually and analytically that simulated peaks are nearly indistinguishable from actual data. Researchers can use simulated data to rigorously test quantitation software, and proteomic researchers may benefit from overlaying simulated data on actual data sets.
A numerical simulation method and analysis of a complete thermoacoustic-Stirling engine.
Ling, Hong; Luo, Ercang; Dai, Wei
2006-12-22
Thermoacoustic prime movers can generate pressure oscillations without any moving parts via the self-excited thermoacoustic effect. The details of the numerical simulation methodology for thermoacoustic engines are presented in this paper. First, a four-port network method is used to build the transcendental equation of complex frequency as a criterion to judge whether the temperature distribution of the whole thermoacoustic system is correct for a given heating power. Then, the numerical simulation of a thermoacoustic-Stirling heat engine is carried out. The numerical simulation code is shown to run robustly and to output the quantities of interest. Finally, the calculated results are compared with experiments on the thermoacoustic-Stirling heat engine (TASHE), showing that the numerical simulation agrees with the experimental results with acceptable accuracy.
NASA Technical Reports Server (NTRS)
Stern, Boris E.; Svensson, Roland; Begelman, Mitchell C.; Sikora, Marek
1995-01-01
High-energy radiation processes in compact cosmic objects are often expected to have a strongly non-linear behavior. Such behavior is shown, for example, by electron-positron pair cascades and the time evolution of relativistic proton distributions in dense radiation fields. Three independent techniques have been developed to simulate these non-linear problems: the kinetic equation approach; the phase-space density (PSD) Monte Carlo method; and the large-particle (LP) Monte Carlo method. In this paper, we present the latest version of the LP method and compare it with the other methods. The efficiency of the method in treating geometrically complex problems is illustrated by showing results of simulations of 1D, 2D and 3D systems. The method is shown to be powerful enough to treat non-spherical geometries, including such effects as bulk motion of the background plasma, reflection of radiation from cold matter, and anisotropic distributions of radiating particles. It can therefore be applied to simulate high-energy processes in such astrophysical systems as accretion discs with coronae, relativistic jets, pulsar magnetospheres and gamma-ray bursts.
Computing pKa Values with a Mixing Hamiltonian Quantum Mechanical/Molecular Mechanical Approach.
Liu, Yang; Fan, Xiaoli; Jin, Yingdi; Hu, Xiangqian; Hu, Hao
2013-09-10
Accurate computation of the pKa value of a compound in solution is important but challenging. Here, a new mixing quantum mechanical/molecular mechanical (QM/MM) Hamiltonian method is developed to simulate the free-energy change associated with the protonation/deprotonation processes in solution. The mixing Hamiltonian method is designed for efficient quantum mechanical free-energy simulations by alchemically varying the nuclear potential, i.e., the nuclear charge of the transforming nucleus. In pKa calculations, the charge on the proton is varied as a fraction between 0 and 1, corresponding to the fully deprotonated and protonated states, respectively. Inspired by the mixing potential QM/MM free energy simulation method developed previously [H. Hu and W. T. Yang, J. Chem. Phys. 2005, 123, 041102], this method inherits many advantages of a large class of λ-coupled free-energy simulation methods and of the linear combination of atomic potentials approach. Theoretical and technical details of this method, along with the calculated pKa values of methanol and methanethiol molecules in aqueous solution, are reported. The results show satisfactory agreement with the experimental data.
Fractal propagation method enables realistic optical microscopy simulations in biological tissues
Glaser, Adam K.; Chen, Ye; Liu, Jonathan T.C.
2017-01-01
Current simulation methods for light transport in biological media have limited efficiency and realism when applied to three-dimensional microscopic light transport in biological tissues with refractive heterogeneities. We describe here a technique which combines a beam propagation method valid for modeling light transport in media with weak variations in refractive index, with a fractal model of refractive index turbulence. In contrast to standard simulation methods, this fractal propagation method (FPM) is able to accurately and efficiently simulate the diffraction effects of focused beams, as well as the microscopic heterogeneities present in tissue that result in scattering, refractive beam steering, and the aberration of beam foci. We validate the technique and the relationship between the FPM model parameters and conventional optical parameters used to describe tissues, and also demonstrate the method’s flexibility and robustness by examining the steering and distortion of Gaussian and Bessel beams in tissue with comparison to experimental data. We show that the FPM has utility for the accurate investigation and optimization of optical microscopy methods such as light-sheet, confocal, and nonlinear microscopy. PMID:28983499
Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions
Marinelli, Fabrizio; Faraldo-Gómez, José D.
2015-01-01
We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917
The Bravyi-Kitaev transformation for quantum computation of electronic structure
NASA Astrophysics Data System (ADS)
Seeley, Jacob T.; Richard, Martin J.; Love, Peter J.
2012-12-01
Quantum simulation is an important application of future quantum computers with applications in quantum chemistry, condensed matter, and beyond. Quantum simulation of fermionic systems presents a specific challenge. The Jordan-Wigner transformation allows for representation of a fermionic operator by O(n) qubit operations. Here, we develop an alternative method of simulating fermions with qubits, first proposed by Bravyi and Kitaev [Ann. Phys. 298, 210 (2002), 10.1006/aphy.2002.6254; e-print arXiv:quant-ph/0003137v2], that reduces the simulation cost to O(log n) qubit operations for one fermionic operation. We apply this new Bravyi-Kitaev transformation to the task of simulating quantum chemical Hamiltonians, and give a detailed example for the simplest possible case of molecular hydrogen in a minimal basis. We show that the quantum circuit for simulating a single Trotter time step of the Bravyi-Kitaev derived Hamiltonian for H2 requires fewer gate applications than the equivalent circuit derived from the Jordan-Wigner transformation. Since the scaling of the Bravyi-Kitaev method is asymptotically better than the Jordan-Wigner method, this result for molecular hydrogen in a minimal basis demonstrates the superior efficiency of the Bravyi-Kitaev method for all quantum computations of electronic structure.
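For readers who want to experiment, the OpenFermion library exposes both encodings; the sketch below compares the qubit operators produced by the Jordan-Wigner and Bravyi-Kitaev transformations for a single fermionic hopping term. The library calls are believed correct for recent OpenFermion releases, but the exact API should be checked against the installed version.

```python
from openfermion import FermionOperator, jordan_wigner, bravyi_kitaev

# A single hopping term between spin-orbitals 0 and 3: a_0^dagger a_3 + h.c.
hopping = FermionOperator("0^ 3", 1.0) + FermionOperator("3^ 0", 1.0)

jw = jordan_wigner(hopping)   # strings of Z operators between qubits 0 and 3
bk = bravyi_kitaev(hopping)   # logarithmic-weight Pauli strings

print("Jordan-Wigner:\n", jw)
print("Bravyi-Kitaev:\n", bk)
```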
Numerical sedimentation particle-size analysis using the Discrete Element Method
NASA Astrophysics Data System (ADS)
Bravo, R.; Pérez-Aparicio, J. L.; Gómez-Hernández, J. J.
2015-12-01
Sedimentation tests are widely used to determine the particle size distribution of a granular sample. In this work, the Discrete Element Method (DEM) is coupled to a flow simulation using the well-known one-way-coupling method, a computationally affordable approach to the time-consuming numerical simulation of the hydrometer, buoyancy and pipette sedimentation tests. These tests are used in the laboratory to determine the particle-size distribution of fine-grained aggregates. Five samples with different particle-size distributions are modeled by about six million rigid spheres projected onto two dimensions, with diameters ranging from 2.5 × 10^-6 m to 70 × 10^-6 m, forming a water suspension in a sedimentation cylinder. DEM simulates the particles' movement, accounting for the laminar-flow interactions of buoyancy, drag and lubrication forces. The simulation provides the temporal and spatial distributions of densities and concentrations of the suspension. The numerical simulations cannot replace the laboratory tests, since they need the final granulometry as initial data; but, as the results show, these simulations can identify the strong and weak points of each method, recommend useful variations, and support conclusions on their validity, aspects that are very difficult to achieve in the laboratory.
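To make the one-way-coupling idea concrete, the sketch below advances a single small sphere settling under Stokes drag, buoyancy and gravity, with the fluid assumed unaffected by the particle (the defining assumption of one-way coupling); all material parameters are illustrative, not values from the paper.

```python
import numpy as np

# Illustrative properties for a fine quartz grain settling in water.
d = 20e-6                        # particle diameter, m
rho_p, rho_f = 2650.0, 1000.0    # particle and fluid densities, kg/m^3
mu = 1.0e-3                      # dynamic viscosity, Pa*s
g = 9.81
m = rho_p * np.pi * d**3 / 6.0

v, dt = 0.0, 1e-5
for _ in range(20000):
    drag = -3.0 * np.pi * mu * d * v                       # Stokes drag (laminar)
    weight = (rho_p - rho_f) * np.pi * d**3 / 6.0 * (-g)   # gravity minus buoyancy
    v += dt * (drag + weight) / m                          # fluid itself unchanged
print(f"settling velocity ~ {abs(v)*1e3:.3f} mm/s")
# Analytic Stokes terminal velocity for comparison:
print(f"analytic        ~ {(rho_p - rho_f)*g*d**2/(18*mu)*1e3:.3f} mm/s")
```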
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Sownak; Li, Baojiu; He, Jian-hua
We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512^3 particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.
Computer simulation and implementation of defected ground structure on a microstrip antenna
NASA Astrophysics Data System (ADS)
Adrian, H.; Rambe, A. H.; Suherman
2018-03-01
Defected Ground Structure (DGS) is a method in which areas of the antenna ground plane are etched away to shape the ground field into a desirable form. This paper reports the impact of the method on microstrip antennas working at 1800 and 2400 MHz. These frequencies are important, as many radio network applications, such as mobile phones and wireless devices, work on these channels. The assessments were performed by simulating and fabricating the evaluated antennas. Both simulation data and implementation measurements show that DGS successfully improves antenna performance, increasing bandwidth by up to 19%, reducing return loss by up to 109%, and increasing gain by up to 33%.
Numerical Simulation of Selecting Model Scale of Cable in Wind Tunnel Test
NASA Astrophysics Data System (ADS)
Huang, Yifeng; Yang, Jixin
Numerical simulation based on computational fluid dynamics (CFD) provides a possible alternative to physical wind tunnel tests. First, the correctness of the numerical simulation method is validated against a benchmark example. To select the minimum cable length for a given diameter in numerical wind tunnel tests, CFD-based tests are carried out on cables with several different length-to-diameter ratios (L/D). The results show that once L/D reaches 18, the drag coefficient is essentially stable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés
Monte Carlo simulation of gamma spectroscopy systems is common practice these days, the most popular software packages for this being the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it has been demonstrated experimentally only for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. In the simulation, matrix effects (the self-attenuation effect) are not considered; therefore these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method, with a relative bias of less than 1% in all cases.
NASA Astrophysics Data System (ADS)
Lou, Jincheng; Tilton, Nils
2017-11-01
Membrane distillation (MD) is a method of desalination with boundary layers that are challenging to simulate. MD is a thermal process in which warm feed and cool distilled water flow on opposite sides of a hydrophobic membrane. The temperature difference causes water to evaporate from the feed, travel through the membrane, and condense in the distillate. Two challenges to MD are temperature and concentration polarization. Temperature polarization represents a reduction in the transmembrane temperature difference due to heat transfer through the membrane. Concentration polarization describes the accumulation of solutes near the membrane. These phenomena reduce filtration and lead to membrane fouling. They are difficult to simulate due to the coupling between the velocity, temperature, and concentration fields on the membrane. Unsteady regimes are particularly challenging because noise at the outlets can pollute the near-membrane flow fields. We present the development of a finite-volume method for the simulation of fluid flow, heat, and mass transport in MD systems. Using the method, we perform a parametric study of the polarization boundary layers, and show that the concentration boundary layer exhibits self-similar behavior whose downstream growth satisfies power laws. Funded by the U.S. Bureau of Reclamation.
Ray tracing simulation of aero-optical effect using multiple gradient index layer
NASA Astrophysics Data System (ADS)
Yang, Seul Ki; Seong, Sehyun; Ryu, Dongok; Kim, Sug-Whan; Kwon, Hyeuknam; Jin, Sang-Hun; Jeong, Ho; Kong, Hyun Bae; Lim, Jae Wan; Choi, Jong Hwa
2016-10-01
We present a new ray tracing simulation of the aero-optical effect through the anisotropic, inhomogeneous medium formed by the supersonic flow field surrounding a projectile. The new method uses multiple gradient-index (GRIN) layers for construction of the anisotropic inhomogeneous medium and the ray tracing simulation. The cone-shaped projectile studied has a 19° semi-vertex angle, with a sapphire window parallel to the cone surface; the projectile's optical system was modeled via paraxial optics and an infrared image detector. The steady-state computational fluid dynamics (CFD) conditions included Mach numbers of 4 and 6, an altitude of 25 km, and a 0° angle of attack (AoA). The grid refractive index of the flow field, obtained via CFD analysis and the Gladstone-Dale relation, was discretized into equally spaced layers parallel to the projectile's window. Each layer was modeled as a 2D polynomial by fitting the refractive index distribution. The light source ray set generated 3,228 rays for lines of sight (LOS) varying from 10° to 40°. The ray tracing simulation adopted Snell's law in 3D to compute the paths of skew rays in the GRIN layers. The results show that the optical path difference (OPD) and boresight error (BSE) decrease exponentially as LOS increases. The variation of the refractive index decreases with increasing LOS, and the OPD and its rate of decay at Mach 6 are somewhat larger than at Mach 4. Compared with the ray equation method at Mach 4 and 10° LOS, the new method shows good agreement, with a relative root-mean-square (RMS) OPD difference of 0.33% and a relative BSE difference of 0.22%. Moreover, the simulation time of the new method was more than 20,000 times faster than the conventional ray equation method. The technical details of the new method and simulation are presented with results and implications.
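The per-interface operation in such a layered GRIN trace is the vector form of Snell's law; the sketch below refracts a skew ray direction across a planar interface between two layers of constant index. The geometry and indices are placeholders, and a full simulation would repeat this across every fitted layer.

```python
import numpy as np

def refract(d, n_hat, n1, n2):
    # Vector Snell's law: refract unit direction d at an interface with
    # unit normal n_hat (pointing into the incident medium), from index n1 to n2.
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n_hat, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n_hat

# Placeholder layer indices close to 1, as for air at flight conditions.
d_in = np.array([0.0, -np.sin(np.radians(30)), -np.cos(np.radians(30))])
t = refract(d_in, n_hat=np.array([0.0, 0.0, 1.0]), n1=1.000280, n2=1.000265)
print("refracted direction:", t)
```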
Voelz, David G; Roggemann, Michael C
2009-11-10
Accurate simulation of scalar optical diffraction requires consideration of the sampling requirement for the phase chirp function that appears in the Fresnel diffraction expression. We describe three sampling regimes for FFT-based propagation approaches: ideally sampled, oversampled, and undersampled. Ideal sampling, where the chirp and its FFT both have values that match analytic chirp expressions, usually provides the most accurate results but can be difficult to realize in practical simulations. Under- or oversampling leads to a reduction in the available source plane support size, the available source bandwidth, or the available observation support size, depending on the approach and simulation scenario. We discuss three Fresnel propagation approaches: the impulse response/transfer function (angular spectrum) method, the single FFT (direct) method, and the two-step method. With illustrations and simulation examples we show the form of the sampled chirp functions and their discrete transforms, common relationships between the three methods under ideal sampling conditions, and define conditions and consequences to be considered when using nonideal sampling. The analysis is extended to describe the sampling limitations for the more exact Rayleigh-Sommerfeld diffraction solution.
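A minimal sketch of the transfer-function (angular spectrum) approach described above is given below: the field is propagated a distance z by multiplying its FFT by the Fresnel transfer function H(fx, fy) = exp(i k z) exp(-i π λ z (fx^2 + fy^2)). The grid, wavelength and distance are illustrative, and the sampling regime should be checked against the criteria discussed in the paper.

```python
import numpy as np

def fresnel_tf_propagate(u0, wavelength, dx, z):
    # Transfer-function (angular spectrum) Fresnel propagation of field u0
    # sampled with pitch dx over a distance z.
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * 2*np.pi/wavelength * z) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Illustrative example: 2 mm square aperture, 0.5 um light, 1 m propagation.
n, dx = 512, 10e-6
x = (np.arange(n) - n/2) * dx
X, Y = np.meshgrid(x, x)
u0 = ((np.abs(X) < 1e-3) & (np.abs(Y) < 1e-3)).astype(complex)
u1 = fresnel_tf_propagate(u0, wavelength=0.5e-6, dx=dx, z=1.0)
print("on-axis intensity:", np.abs(u1[n//2, n//2])**2)
```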
Response Matrix Monte Carlo for electron transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballinger, C.T.; Nielsen, D.E. Jr.; Rathkopf, J.A.
1990-11-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need for a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but they fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive, since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. The combined effect of many collisions is modeled, as in condensed history, except that it is precalculated via an analog Monte Carlo simulation. This avoids the scattering kernel assumptions associated with condensed history methods. Results show good agreement between the RMMC method and analog Monte Carlo. 11 refs., 7 figs., 1 tabs.
The Programming Language Python In Earth System Simulations
NASA Astrophysics Data System (ADS)
Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.
2004-12-01
Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicholson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also present results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
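To give a flavor of the programming model described above, the hedged sketch below sets up and solves a simple Poisson-type problem with a LinearPDE object; the class and function names follow the escript documentation as we recall it and should be verified against the installed release.

```python
# A minimal escript-style sketch (assumes the esys-escript package is installed).
from esys.escript import kronecker, whereZero
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

# Unit square discretized with a 40 x 40 finite element mesh.
domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)

# -div(A grad u) = Y with u = 0 on the left edge (x0 = 0).
pde = LinearPDE(domain)
x = domain.getX()
pde.setValue(A=kronecker(domain), Y=1.0,
             q=whereZero(x[0]), r=0.0)  # q marks constrained nodes, r their values

u = pde.getSolution()
print("max u =", u.sup())  # Data objects carry values on the mesh
```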
NASA Astrophysics Data System (ADS)
Castiglioni, Giacomo
Flows over airfoils and blades in rotating machinery, for unmanned and micro-aerial vehicles, wind turbines, and propellers consist of a laminar boundary layer near the leading edge that is often followed by a laminar separation bubble and transition to turbulence further downstream. Typical Reynolds-averaged Navier-Stokes turbulence models are inadequate for such flows. Direct numerical simulation is the most reliable, but also the most computationally expensive, alternative. This work assesses the capability of immersed boundary methods and large eddy simulations to reduce the computational requirements for such flows and still provide high quality results. Two-dimensional and three-dimensional simulations of a laminar separation bubble on a NACA-0012 airfoil at Re_c = 5 × 10^4 and 5° of incidence have been performed with an immersed boundary code and a commercial code using body-fitted grids. Several sub-grid scale models have been implemented in both codes and their performance evaluated. For the two-dimensional simulations with the immersed boundary method, the results show good agreement with the direct numerical simulation benchmark data for the pressure coefficient Cp and the friction coefficient Cf, but only when using dissipative numerical schemes. There is evidence that this behavior can be attributed to the ability of dissipative schemes to damp numerical noise coming from the immersed boundary. For the three-dimensional simulations, the results show a good prediction of the separation point, but an inaccurate prediction of the reattachment point unless full direct numerical simulation resolution is used. The commercial code shows good agreement with the direct numerical simulation benchmark data in both two- and three-dimensional simulations, but the presence of significant, unquantified numerical dissipation prevents a conclusive assessment of the actual prediction capabilities of very coarse large eddy simulations with low order schemes in general cases. Additionally, a two-dimensional sweep of angles of attack from 0° to 5° is performed, showing a qualitative prediction of the jump in lift and drag coefficients due to the appearance of the laminar separation bubble. The numerical dissipation inhibits the predictive capabilities of large eddy simulations whenever it is of the same order of magnitude as, or larger than, the sub-grid scale dissipation. The need to estimate the numerical dissipation is most pressing for the low-order methods employed by commercial computational fluid dynamics codes. Following the recent work of Schranner et al., the equations and procedure for estimating the numerical dissipation rate and the numerical viscosity in a commercial code are presented. The method allows for the computation of the numerical dissipation rate and numerical viscosity in physical space for arbitrary sub-domains in a self-consistent way, using only information provided by the code in question. The method is first tested for a three-dimensional Taylor-Green vortex flow in a simple cubic domain and compared with benchmark results obtained using an accurate, incompressible spectral solver. Afterwards the same procedure is applied for the first time to a realistic flow configuration, specifically the laminar separation bubble flow over a NACA 0012 airfoil discussed above.
The method appears to be quite robust, and its application reveals that for the code and the flow in question the numerical dissipation can be significantly larger than the viscous dissipation or the dissipation of the classical Smagorinsky sub-grid scale model, confirming the earlier qualitative finding.
NASA Astrophysics Data System (ADS)
Kwon, Deuk-Chul; Shin, Sung-Sik; Yu, Dong-Hun
2017-10-01
In order to reduce the computing time in simulations of radio frequency (rf) plasma sources, various numerical schemes have been developed. It is well known that the upwind, exponential, and power-law schemes can efficiently overcome the limitation on the grid size in fluid transport simulations of high density plasma discharges. Likewise, the semi-implicit method is a well-known numerical scheme for overcoming the limitation on the simulation time step. However, despite remarkable advances in numerical techniques and computing power over the last few decades, efficient multi-dimensional modeling of low temperature plasma discharges has remained a considerable challenge. In particular, time-periodic steady-state problems such as capacitively coupled plasma discharges and rf sheath dynamics have been difficult to parallelize in time, because plasma parameter values from the previous time step are needed to calculate the new values at each time step. We therefore present a parallelization method for time-periodic steady-state problems that uses period slices. To evaluate the efficiency of the developed method, one-dimensional fluid simulations of rf sheath dynamics are conducted. The results show that a speedup can be achieved with a multithreading method.
Research on Radiation Characteristic of Plasma Antenna through FDTD Method
Zhou, Jianming; Fang, Jingjing; Lu, Qiuyuan; Liu, Fan
2014-01-01
The radiation characteristics of a plasma antenna are investigated using the finite-difference time-domain (FDTD) approach in this paper. Using the FDTD method, we study the propagation of electromagnetic waves in free space in stretched coordinates, and the iterative update equations of Maxwell's equations are derived. In order to validate the correctness of this method, we simulate the process of an electromagnetic wave propagating in free space. Results show that the electromagnetic wave spreads out around the signal source and is absorbed by the perfectly matched layer (PML). Furthermore, we study the propagation of electromagnetic waves in plasma using the Boltzmann-Maxwell theory. To verify this theory, the whole process of an electromagnetic wave propagating in plasma is simulated for the one-dimensional case. Results show that the Boltzmann-Maxwell theory can explain the phenomenon of electromagnetic wave propagation in plasma. Finally, a two-dimensional simulation model of the plasma antenna is established in cylindrical coordinates, and the near-field and far-field radiation patterns are obtained. The experiments show that varying the electron density changes the radiation characteristics. PMID:25114961
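A minimal sketch of a one-dimensional FDTD update in vacuum is shown below to make the leapfrog structure concrete; the grid size and soft Gaussian source are placeholders, the ends are simply reflecting, and neither the plasma model nor the PML of the paper is included.

```python
import numpy as np

nz, nt = 400, 1000
ez = np.zeros(nz)   # electric field
hy = np.zeros(nz)   # magnetic field (normalized units)
S = 0.5             # Courant number

for n in range(nt):
    hy[:-1] += S * (ez[1:] - ez[:-1])            # update H from the curl of E
    ez[1:]  += S * (hy[1:] - hy[:-1])            # update E from the curl of H
    ez[nz//2] += np.exp(-((n - 40) / 12.0)**2)   # soft Gaussian source at center

print("peak |Ez| =", np.abs(ez).max())
```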
The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations
NASA Astrophysics Data System (ADS)
Degen, D.; Veroy, K.; Wellmann, F.
2017-12-01
Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we face high-dimensional problems associated with large uncertainties in the geosciences. Performing the necessary uncertainty quantification with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. We therefore present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the number of degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be performed beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time contexts. We illustrate the advantages of the RB method for the geosciences through two examples of numerical forward simulations. The first example is a geothermal conduction problem demonstrating the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given, and the runtimes of the FE and RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution relative to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance there, in particular its scalability, enabling efficient use on HPC infrastructures as well as ordinary workstations.
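The offline/online split can be illustrated with a hedged proper-orthogonal-decomposition sketch: offline, snapshot solutions of a parameterized linear system are compressed into a few basis vectors; online, a tiny projected system is solved for a new parameter. This generic Galerkin reduction stands in for the RB machinery of the paper, which also includes error estimators and greedy sampling; all matrices and parameters are placeholders.

```python
import numpy as np

# Parameterized 1D conduction model: (K0 + mu * K1) u = f, placeholder matrices.
n = 200
main = 2.0 * np.ones(n); off = -1.0 * np.ones(n - 1)
K0 = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
K1 = np.diag(np.linspace(0.0, 1.0, n))   # hypothetical parameter-dependent part
f = np.ones(n)

# Offline stage: full solves at training parameters, then POD via SVD.
snapshots = np.column_stack([np.linalg.solve(K0 + mu * K1, f)
                             for mu in np.linspace(0.1, 10.0, 20)])
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :5]  # 5 basis vectors

# Online stage: solve only a 5x5 projected system for a new parameter.
mu_new = 3.7
u_rb = V @ np.linalg.solve(V.T @ (K0 + mu_new * K1) @ V, V.T @ f)
u_fe = np.linalg.solve(K0 + mu_new * K1, f)
print("relative error:", np.linalg.norm(u_rb - u_fe) / np.linalg.norm(u_fe))
```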
Fast image-based mitral valve simulation from individualized geometry.
Villard, Pierre-Frederic; Hammer, Peter E; Perrin, Douglas P; Del Nido, Pedro J; Howe, Robert D
2018-04-01
Common surgical procedures on the mitral valve of the heart include modifications to the chordae tendineae. Such interventions are used when there is extensive leaflet prolapse caused by chordae rupture or elongation. Understanding the role of individual chordae tendineae before operating could be helpful to predict whether the mitral valve will be competent at peak systole. Biomechanical modelling and simulation can achieve this goal. We present a method to semi-automatically build a computational model of a mitral valve from micro CT (computed tomography) scans: after manually picking chordae fiducial points, the leaflets are segmented and the boundary conditions as well as the loading conditions are automatically defined. Fast finite element method (FEM) simulation is carried out using Simulation Open Framework Architecture (SOFA) to reproduce leaflet closure at peak systole. We develop three metrics to evaluate simulation results: (i) point-to-surface error with the ground truth reference extracted from the CT image, (ii) coaptation surface area of the leaflets and (iii) an indication of whether the simulated closed leaflets leak. We validate our method on three explanted porcine hearts and show that our model predicts the closed valve surface with point-to-surface error of approximately 1 mm, a reasonable coaptation surface area, and absence of any leak at peak systole (maximum closed pressure). We also evaluate the sensitivity of our model to changes in various parameters (tissue elasticity, mesh accuracy, and the transformation matrix used for CT scan registration). We also measure the influence of the positions of the chordae tendineae on simulation results and show that marginal chordae have a greater influence on the final shape than intermediate chordae. The mitral valve simulation can help the surgeon understand valve behaviour and anticipate the outcome of a procedure. Copyright © 2018 John Wiley & Sons, Ltd.
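Of the three metrics, the point-to-surface error is straightforward to approximate numerically; the sketch below (illustrative only, with a densely sampled point cloud standing in for the ground-truth surface) uses a KD-tree nearest-neighbor query:

```python
import numpy as np
from scipy.spatial import cKDTree

def point_to_surface_error(sim_pts, truth_pts):
    """Approximate point-to-surface distance: for each simulated vertex,
    the distance to the nearest densely sampled ground-truth point."""
    d, _ = cKDTree(truth_pts).query(sim_pts)
    return d.mean(), d.max()

# toy usage: a unit sphere versus a slightly inflated copy of itself
u = np.random.default_rng(1).standard_normal((5000, 3))
truth = u / np.linalg.norm(u, axis=1, keepdims=True)
sim = 1.001 * truth[::5]
print(point_to_surface_error(sim, truth))   # mean/max near 0.001
```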
Sun, Rui; Dama, James F; Tan, Jeffrey S; Rose, John P; Voth, Gregory A
2016-10-11
Metadynamics is an important enhanced sampling technique in molecular dynamics simulation to efficiently explore potential energy surfaces. The recently developed transition-tempered metadynamics (TTMetaD) has been proven to converge asymptotically without sacrificing exploration of the collective variable space in the early stages of simulations, unlike other convergent metadynamics (MetaD) methods. We have applied TTMetaD to study the permeation of drug-like molecules through a lipid bilayer to further investigate the usefulness of this method as applied to problems of relevance to medicinal chemistry. First, ethanol permeation through a lipid bilayer was studied to compare TTMetaD with nontempered metadynamics and well-tempered metadynamics. The bias energies computed from various metadynamics simulations were compared to the potential of mean force calculated from umbrella sampling. Though all of the MetaD simulations agree with one another asymptotically, TTMetaD is able to predict the most accurate and reliable estimate of the potential of mean force for permeation in the early stages of the simulations and is robust to the choice of required additional parameters. We also show that using multiple randomly initialized replicas allows convergence analysis and also provides an efficient means to converge the simulations in shorter wall times and, more unexpectedly, in shorter CPU times; splitting the CPU time between multiple replicas appears to lead to less overall error. After validating the method, we studied the permeation of a more complicated drug-like molecule, trimethoprim. Three sets of TTMetaD simulations with different choices of collective variables were carried out, and all converged within feasible simulation time. The minimum free energy paths showed that TTMetaD was able to predict almost identical permeation mechanisms in each case despite significantly different definitions of collective variables.
3-D simulation of nanopore structure for DNA sequencing.
Park, Jun-Mo; Pak, Y Eugene; Chun, Honggu; Lee, Jong-Ho
2012-07-01
In this paper, we propose a method for simulating a nanopore structure by using a conventional 3-D simulation tool to mimic the I-V behavior of the nanopore. In the simulation, we use lightly doped silicon for the ionic solution, with parameters such as electron affinity and dielectric constant fitted to represent the solution. With this method, we can simulate the I-V behavior of the nanopore structure depending on the location and size of a sphere-shaped silicon oxide particle, which serves as an indicator of a DNA base. In addition, we simulate an ionic field effect transistor (IFET), which is essentially a nanopore structure, and show that the simulated curves sufficiently follow the I-V behavior of the measured data. We therefore consider it reasonable to apply the parameter modeling mentioned above to simulate nanopore structures. The key idea is to modify the electron affinity of the silicon used to mimic the KCl solution so as to avoid band bending and depletion inside the nanopore. We could thus efficiently utilize a conventional 3-D simulation tool to simulate the I-V behavior of nanopore structures.
Thin-film designs by simulated annealing
NASA Astrophysics Data System (ADS)
Boudet, T.; Chaton, P.; Herault, L.; Gonon, G.; Jouanet, L.; Keller, P.
1996-11-01
With the increasing power of computers, new methods for the synthesis of optical multilayer systems have appeared. Among these, the simulated-annealing algorithm has proved its efficiency in several fields of physics. We demonstrate its performance in the field of optical multilayer systems through different filter designs.
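For readers unfamiliar with the algorithm, a minimal simulated-annealing loop looks like the following Python sketch; the merit function here is a stand-in, whereas a real filter design would score the stack's spectral response (e.g. computed via transfer matrices) against a target:

```python
import numpy as np

# Simulated-annealing sketch for a multilayer design: perturb layer
# thicknesses to minimize a merit function. The quadratic merit below is
# a placeholder for a spectral-response figure of merit.
rng = np.random.default_rng(0)

def merit(t):                        # illustrative target: thicknesses
    return np.sum((t - 120.0) ** 2)  # drawn toward 120 nm

t = rng.uniform(50, 300, size=8)     # 8 layers, thickness in nm
e = merit(t)
T = 1e4                              # initial "temperature"
for step in range(20000):
    cand = np.clip(t + rng.normal(0, 5, size=t.shape), 10, 500)
    de = merit(cand) - e
    if de < 0 or rng.random() < np.exp(-de / T):   # Metropolis acceptance
        t, e = cand, e + de
    T *= 0.9995                                    # geometric cooling
print(round(e, 3), t.round(1))
```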
A satellite-based radar wind sensor
NASA Technical Reports Server (NTRS)
Xin, Weizhuang
1991-01-01
The objective is to investigate the application of Doppler radar systems for global wind measurement. A model of the satellite-based radar wind sounder (RAWS) is discussed, and many critical problems in the design process, such as the antenna scan pattern, tracking the Doppler shift caused by satellite motion, and backscattering of radar signals from different types of clouds, are discussed along with their computer simulations. In addition, algorithms for measuring the mean frequency of radar echoes, such as the Fast Fourier Transform (FFT) estimator, the covariance estimator, and estimators based on autoregressive models, are discussed. Monte Carlo computer simulations were used to compare the performance of these algorithms. Anti-alias methods are discussed for the FFT and the autoregressive methods. Several algorithms for reducing radar ambiguity were studied, such as random phase coding methods and staggered pulse repetition frequency (PRF) methods. Computer simulations showed that these methods are not applicable to the RAWS because of the broad spectral widths of the radar echoes from clouds. A waveform modulation method using the concept of spread spectrum and correlation detection was developed to resolve the radar ambiguity. Radar ambiguity functions were used to analyze the effective signal-to-noise ratios for the waveform modulation method. The results showed that, with a suitable bandwidth product and modulation of the waveform, this method can achieve the desired maximum range and maximum frequency of the radar system.
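The covariance (pulse-pair) estimator mentioned above has a compact standard form: the mean Doppler frequency follows from the phase of the lag-one autocorrelation. The sketch below is the textbook version, not the thesis code:

```python
import numpy as np

def pulse_pair_frequency(z, prt):
    """Covariance (pulse-pair) mean-frequency estimate from complex radar
    samples z taken at pulse repetition time prt:
    f = angle(R(1)) / (2*pi*prt), aliased into +/- 1/(2*prt)."""
    r1 = np.mean(z[1:] * np.conj(z[:-1]))    # lag-one autocorrelation
    return np.angle(r1) / (2 * np.pi * prt)

# toy check: noisy complex sinusoid at 300 Hz sampled at a 2 kHz PRF
rng = np.random.default_rng(2)
prt, f0, n = 5e-4, 300.0, 256
t = np.arange(n) * prt
z = np.exp(2j * np.pi * f0 * t) + 0.3 * (rng.standard_normal(n)
                                         + 1j * rng.standard_normal(n))
print(pulse_pair_frequency(z, prt))          # ~300 Hz
```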
Massively parallel multicanonical simulations
NASA Astrophysics Data System (ADS)
Gross, Jonathan; Zierenberg, Johannes; Weigel, Martin; Janke, Wolfhard
2018-03-01
Generalized-ensemble Monte Carlo simulations such as the multicanonical method and similar techniques are among the most efficient approaches for simulating systems undergoing discontinuous phase transitions or with rugged free-energy landscapes. As Markov chain methods, they are inherently serial. It was demonstrated recently, however, that a combination of independent simulations that communicate weight updates at variable intervals allows for the efficient utilization of parallel computational resources for multicanonical simulations. Implementing this approach for the many-thread architecture provided by current generations of graphics processing units (GPUs), we show how it can be efficiently employed with of the order of 10^4 parallel walkers and beyond, thus constituting a versatile tool for Monte Carlo simulations in the era of massively parallel computing. We provide fully documented source code for the approach applied to the paradigmatic example of the two-dimensional Ising model as a starting point and reference for practitioners in the field.
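A schematic serial multicanonical loop, shown below on a toy one-dimensional double-well energy, conveys the core mechanism (log-weights W(E) flattening the energy histogram); in the parallel scheme described above, many such walkers would accumulate histograms jointly and share the weight updates. All parameters are illustrative:

```python
import numpy as np

# Multicanonical sketch on a toy 1-D double well: sample with log-weights
# W(E) and flatten the energy histogram by iterating W <- W - log(H + 1).
rng = np.random.default_rng(3)
nbins, emax = 40, 1.6                        # E range for |x| <= 1.5

def energy(x): return (x * x - 1.0) ** 2     # double well

def ebin(e): return min(int(e / emax * nbins), nbins - 1)

W = np.zeros(nbins)
x = 0.0
for it in range(8):                          # weight-update iterations
    H = np.zeros(nbins)
    for step in range(20000):
        xn = x + rng.normal(0.0, 0.25)
        if abs(xn) <= 1.5:
            i, j = ebin(energy(x)), ebin(energy(xn))
            # multicanonical acceptance rule min(1, exp(W[j] - W[i]))
            if W[j] >= W[i] or rng.random() < np.exp(W[j] - W[i]):
                x = xn
        H[ebin(energy(x))] += 1
    W -= np.log(H + 1.0)                     # crude weight refinement
print("energy bins visited in last pass:", np.count_nonzero(H))
```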
Dependability analysis of parallel systems using a simulation-based approach. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sawyer, Darren Charles
1994-01-01
The analysis of dependability in large, complex, parallel systems executing real applications or workloads is examined in this thesis. To effectively demonstrate the wide range of dependability problems that can be analyzed through simulation, the analysis of three case studies is presented. For each case, the organization of the simulation model used is outlined, and the results from simulated fault injection experiments are explained, showing the usefulness of this method in dependability modeling of large parallel systems. The simulation models are constructed using DEPEND and C++. Where possible, methods to increase dependability are derived from the experimental results. Another interesting facet of all three cases is the presence of some kind of workload or application executing in the simulation while faults are injected. This provides a completely new dimension to this type of study, one that is not possible to model accurately with analytical approaches.
Evaluating average and atypical response in radiation effects simulations
NASA Astrophysics Data System (ADS)
Weller, R. A.; Sternberg, A. L.; Massengill, L. W.; Schrimpf, R. D.; Fleetwood, D. M.
2003-12-01
We examine the limits of performing single-event simulations using pre-averaged radiation events. Geant4 simulations show that, for future devices, current methods must be supplemented with ensemble averaging of device-level responses to physically realistic radiation events. Initial Monte Carlo simulations have generated a significant number of extremal events in local energy deposition. These simulations strongly suggest that proton strikes of sufficient energy, even those that initiate purely electronic interactions, can initiate device responses capable in principle of producing single-event upset or microdose damage in highly scaled devices.
Dynamical Core in Atmospheric Model Does Matter in the Simulation of Arctic Climate
NASA Astrophysics Data System (ADS)
Jun, Sang-Yoon; Choi, Suk-Jin; Kim, Baek-Min
2018-03-01
Climate models using different dynamical cores can simulate significantly different winter Arctic climates even if equipped with virtually the same physics schemes. The current climate simulated by the global climate model using a cubed-sphere grid with the spectral element method (SE core) exhibited significantly warmer Arctic surface air temperatures compared to that using a latitude-longitude grid with the finite volume method core. Compared to the finite volume method core, the SE core simulated additional adiabatic warming in the Arctic lower atmosphere, consistent with the eddy-forced secondary circulation. Downward longwave radiation further enhanced Arctic near-surface warming, with a surface air temperature higher by about 1.9 K. Furthermore, in the atmospheric response to reduced sea ice conditions with the same physical settings, only the SE core showed a robust cooling response over North America. We emphasize that special attention is needed in selecting the dynamical core of climate models in the simulation of the Arctic climate and associated teleconnection patterns.
NASA Technical Reports Server (NTRS)
Ross, M. D.; Linton, S. W.; Parnas, B. R.
2000-01-01
A quasi-three-dimensional finite-volume numerical simulator was developed to study passive voltage spread in vestibular macular afferents. The method, borrowed from computational fluid dynamics, discretizes events transpiring in small volumes over time. The afferent simulated had three calyces with processes. The number of processes and synapses, and direction and timing of synapse activation, were varied. Simultaneous synapse activation resulted in shortest latency, while directional activation (proximal to distal and distal to proximal) yielded most regular discharges. Color-coded visualizations showed that the simulator discretized events and demonstrated that discharge produced a distal spread of voltage from the spike initiator into the ending. The simulations indicate that directional input, morphology, and timing of synapse activation can affect discharge properties, as must also distal spread of voltage from the spike initiator. The finite volume method has generality and can be applied to more complex neurons to explore discrete synaptic effects in four dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalonde, A; Bouchard, H
Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fraction and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to obtain submillimetric precision on proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation, and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
Molecular simulations of diffusion in electrolytes
NASA Astrophysics Data System (ADS)
Wheeler, Dean Richard
This work demonstrates new methodologies for simulating multicomponent diffusion in concentrated solutions using molecular dynamics (MD). Experimental diffusion data for concentrated multicomponent solutions are often lacking, as are accurate methods of predicting diffusion for nonideal solutions. MD can be a viable means of understanding and predicting multicomponent diffusion. While there have been several prior reports of MD simulations of mutual diffusion, no satisfactory expressions for simulating Stefan-Maxwell diffusivities for an arbitrary number of species exist. The approaches developed here allow for the computation of a full diffusion matrix for any number of species in both nonequilibrium and equilibrium MD ensembles. Our nonequilibrium approach is based on the application of constant external fields to drive species diffusion. Our equilibrium approach uses a newly developed Green-Kubo formula for Stefan-Maxwell diffusivities. In addition, as part of this work, we demonstrate a widely applicable means of increasing the computational efficiency of the Ewald sum, a technique for handling long-range Coulombic interactions in simulations. The theoretical development is applicable to any solution which can be simulated using MD; nevertheless, our primary interest is in electrochemical applications. To this end, the methods are tested by simulations of aqueous salt solutions and lithium-battery electrolytes. KCl and NaCl aqueous solutions were simulated over the concentration range 1 to 4 molal. Intermolecular-potential models were parameterized for these transport-based simulations. This work is the first to simulate all three independent diffusion coefficients for aqueous NaCl and KCl solutions. The results show that the nonequilibrium and equilibrium methods are consistent with each other, and in moderate agreement with experiment. We simulate lithium-battery electrolytes containing LiPF6 in propylene carbonate and mixed ethylene carbonate-dimethyl carbonate solvents. As with the aqueous-solution work, potential parameters were generated for these molecules. These nonaqueous electrolytes demonstrate rich transport behavior, which the simulations are able to reproduce qualitatively. In a mixed-solvent simulation we regress all six independent transport coefficients. The simulations show that strong ion pairing is responsible for the increase in viscosity and maximum in conductivity as ion concentrations are increased.
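The equilibrium route rests on Green-Kubo integration of flux correlation functions. The following toy Python sketch shows the generic recipe on a synthetic flux signal; the normalization prefactor (volume, k_B T, dimensionality) and the inversion from Onsager coefficients to Stefan-Maxwell diffusivities depend on the definitions in the work itself and are deliberately left abstract here:

```python
import numpy as np

# Green-Kubo sketch: integrate the autocorrelation of a flux time series
# to obtain a transport coefficient, L ~ (1/norm) * integral <J(0)J(t)> dt.
# Prefactors are absorbed into `norm`; mapping the resulting Onsager
# matrix to Stefan-Maxwell diffusivities is a separate inversion step.
def green_kubo(J, dt, tmax, norm=1.0):
    n = int(tmax / dt)
    acf = np.array([np.mean(J[k:] * J[:len(J) - k]) for k in range(n)])
    return np.trapz(acf, dx=dt) / norm

# toy flux: an Ornstein-Uhlenbeck process with known ACF integral tau^2/2
rng = np.random.default_rng(4)
dt, tau, nsteps = 0.01, 0.5, 200000
J = np.zeros(nsteps)
for k in range(1, nsteps):
    J[k] = J[k - 1] * (1 - dt / tau) + rng.normal(0, np.sqrt(dt))
print(green_kubo(J, dt, tmax=5.0))   # expect about tau^2/2 = 0.125
```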
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
How to qualify and validate wear simulation devices and methods.
Heintze, S D
2006-08-01
The clinical significance of increased wear can mainly be attributed to impaired aesthetic appearance and/or functional restrictions. Little is known about the systemic effects of swallowed or inhaled worn particles that derive from restorations. As wear measurements in vivo are complicated and time-consuming, wear simulation devices and methods had been developed without, however, systematically looking at the factors that influence important wear parameters. Wear simulation devices shall simulate processes that occur in the oral cavity during mastication, namely force, force profile, contact time, sliding movement, clearance of worn material, etc. Different devices that use different force actuator principles are available. Those with the highest citation frequency in the literature are - in descending order - the Alabama, ACTA, OHSU, Zurich and MTS wear simulators. When following the FDA guidelines on good laboratory practice (GLP) only the expensive MTS wear simulator is a qualified machine to test wear in vitro; the force exerted by the hydraulic actuator is controlled and regulated during all movements of the stylus. All the other simulators lack control and regulation of force development during dynamic loading of the flat specimens. This may be an explanation for the high coefficient of variation of the results in some wear simulators (28-40%) and the poor reproducibility of wear results if dental databases are searched for wear results of specific dental materials (difference of 22-72% for the same material). As most of the machines are not qualifiable, wear methods applying the machine may have a sound concept but cannot be validated. Only with the MTS method have wear parameters and influencing factors been documented and verified. A good compromise with regard to costs, practicability and robustness is the Willytec chewing simulator, which uses weights as force actuator and step motors for vertical and lateral movements. The Ivoclar wear method run on the Willytec machine shows a mean coefficient of variation in vertical wear of 12%. Force measurements have revealed that in the beginning of the stylus/specimen contact phase the force impulse is 3-4 times higher during dynamic loading than during static loading. When correlating material properties to the wear results of 23 composite resins subjected to the Ivoclar method, some parameters could be identified and incorporated into a wear formula to predict wear with the Ivoclar method. A round robin test evaluating the wear of ten dental materials with five wear simulation methods showed that the results were not comparable, as all methods follow different wear testing concepts. All wear methods lack the evidence of their clinical relevance because prospective studies correlating in vitro with long-term in vivo results with identical materials are not available. For direct restorative materials, amalgam seems to be a realistic reference material. For indirect, namely crown and bridge materials, low strength ceramic is appropriate.
Simulation Research on Vehicle Active Suspension Controller Based on G1 Method
NASA Astrophysics Data System (ADS)
Li, Gen; Li, Hang; Zhang, Shuaiyang; Luo, Qiuhui
2017-09-01
Based on the order relation analysis method (G1 method), an optimal linear controller for vehicle active suspension is designed. First, the active and passive suspension systems of a single-wheel vehicle model are modeled and the system input signal model is determined. Next, the state space equation of the system motion is established from the dynamics, and the optimal linear controller design is completed with optimal control theory. The weighting coefficients of the suspension performance index are determined by the G1 method. Finally, the model is simulated in Simulink. The simulation results show that, with the optimal weights determined by the G1 method under the given road conditions, the vehicle body acceleration, suspension stroke and tire displacement are optimized, improving the comprehensive performance of the vehicle while keeping the active control within the requirements.
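For reference, the G1 weight construction itself is short: criteria are pre-ranked by importance and the analyst supplies importance ratios r_k = w_{k-1}/w_k, from which all weights follow. The sketch below uses made-up ratio values and the commonly cited closed form; the exact variant used in the paper may differ:

```python
# G1 (order-relation analysis) weight sketch. Criteria are ranked by
# importance; r[k] = w[k-1] / w[k] are analyst-supplied ratios (made up
# here for illustration). Weights sum to 1.
def g1_weights(r):
    prods, p = [], 1.0
    for rk in reversed(r):            # products prod_{i=k..m} r_i
        p *= rk
        prods.append(p)
    w = [1.0 / (1.0 + sum(prods))]    # weight of least important criterion
    for rk in reversed(r):
        w.append(w[-1] * rk)          # w[k-1] = r[k] * w[k]
    return list(reversed(w))

print(g1_weights([1.2, 1.4, 1.1]))    # 4 criteria; weights sum to 1
```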
Tahmasebibirgani, Mohammad Javad; Maskani, Reza; Behrooz, Mohammad Ali; Zabihzadeh, Mansour; Shahbazian, Hojatollah; Fatahiasl, Jafar; Chegeni, Nahid
2017-01-01
Introduction In radiotherapy, megaelectron volt (MeV) electrons are employed for the treatment of superficial cancers. Magnetic fields can be used for deflection and deformation of the electron flow; here, the magnetic field was produced by non-uniform permanent magnets. The primary electrons are neither mono-energetic nor completely parallel, and calculation of electron beam deflection requires complex mathematical methods. In this study, a device was made to apply a magnetic field to an electron beam, and the path of the electrons in the magnetic field was simulated using the finite element method. Methods A mini-applicator equipped with two neodymium permanent magnets was designed that enables tuning the distance between the magnets. This device was placed in a standard applicator of a Varian 2100 CD linear accelerator. The mini-applicator was simulated in the CST Studio finite element software. The deflection angle and displacement of the electron beam after passing through the magnetic field were calculated. By setting a 2 to 5 cm distance between the two poles, various intensities of transverse magnetic field were created. The accelerator head was turned so that the deflected electrons became perpendicular to the water surface. To measure the displacement of the electron beam, EBT2 GafChromic films were employed. After being exposed, the films were scanned using an HP G3010 reflection scanner and their optical density was extracted using a program written in the MATLAB environment. The measured displacement of the electron beam was compared with the simulation results after applying the magnetic field. Results Simulation results for the magnetic field showed good agreement with measured values. The maximum deflection angle, for a 12 MeV beam, was 32.9°, and the minimum deflection, for 15 MeV, was 12.1°. Measurement with the film confirmed the precision of the simulation in predicting the displacement of the electron beam. Conclusion A magnetic mini-applicator was made and simulated using the finite element method. The deflection angle and displacement of the electron beam were calculated. With the method used in this study, a good prediction of the path of high-energy electrons was made before they entered the body. PMID:28607652
Lin, Tao; Sun, Huijun; Chen, Zhong; You, Rongyi; Zhong, Jianhui
2007-12-01
Diffusion weighting in MRI is commonly achieved with the pulsed-gradient spin-echo (PGSE) method. When combined with spin-warping image formation, this method often results in ghosts due to the sample's macroscopic motion. It has been shown experimentally (Kennedy and Zhong, MRM 2004;52:1-6) that these motion artifacts can be effectively eliminated by the distant dipolar field (DDF) method, which relies on the refocusing of spatially modulated transverse magnetization by the DDF within the sample itself. In this report, diffusion-weighted images (DWIs) using both DDF and PGSE methods in the presence of macroscopic sample motion were simulated. Numerical simulation results quantify the dependence of signals in DWI on several key motion parameters and demonstrate that the DDF DWIs are much less sensitive to macroscopic sample motion than the traditional PGSE DWIs. The results also show that the dipolar correlation distance (d(c)) can alter contrast in DDF DWIs. The simulated results are in good agreement with the experimental results reported previously.
Forecasting Lightning Threat using Cloud-Resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, Eugene W., Jr.; Goodman, Steven J.; LaCasse, Katherine M.; Cecil, Daniel J.
2008-01-01
Two new approaches are proposed and developed for making time- and space-dependent, quantitative short-term forecasts of lightning threat, and a blend of these approaches is devised that capitalizes on the strengths of each. The new methods are distinctive in that they are based entirely on the ice-phase hydrometeor fields generated by regional cloud-resolving numerical simulations, such as those produced by the WRF model. These methods are justified by established observational evidence linking aspects of the precipitating ice hydrometeor fields to total flash rates. The methods are straightforward and easy to implement, and offer an effective near-term alternative to the incorporation of complex and costly cloud electrification schemes into numerical models. One method is based on upward fluxes of precipitating ice hydrometeors in the mixed-phase region at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domain-wide statistics of the peak values of simulated flash-rate proxy fields against domain-wide peak total lightning flash-rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. Our blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Exploratory tests for selected North Alabama cases show that, because WRF can distinguish the general character of most convective events, our methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because the models tend to have more difficulty in predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current-generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of forecasts become available.
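On model output, both proxies reduce to a few array operations. The sketch below is a schematic rendering of the two proxies and the blend (field names, units handling and the calibration constants are placeholders; in practice each proxy is calibrated against observed peak flash-rate densities as described above):

```python
import numpy as np

# Schematic rendering of the two flash-rate proxies described above.
# Names and calibration constants are placeholders, not from the paper.
def proxy_flux(w, q_ice, rho):
    """Proxy 1: upward flux of precipitating ice on the -15 C model level;
    w (m/s), q_ice (kg/kg), rho (kg/m^3) are 2-D fields on that level."""
    return np.maximum(w, 0.0) * q_ice * rho          # kg m^-2 s^-1

def proxy_vii(q_ice_3d, rho_3d, dz):
    """Proxy 2: vertically integrated ice in each grid column."""
    return np.sum(q_ice_3d * rho_3d * dz, axis=0)    # kg m^-2

def blended_threat(f1, f2, cal1, cal2, w1=0.95):
    """Blend retaining mostly the flux proxy's temporal sensitivity; the
    weight and the calibration factors cal1/cal2 are illustrative only."""
    return w1 * cal1 * f1 + (1.0 - w1) * cal2 * f2

# toy usage on random fields
rng = np.random.default_rng(0)
w15, qi15, rho15 = rng.random((3, 60, 60))
f1 = proxy_flux(w15, qi15, rho15)
f2 = proxy_vii(rng.random((30, 60, 60)), rng.random((30, 60, 60)), dz=400.0)
threat = blended_threat(f1, f2, cal1=1.0, cal2=1.0)
```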
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
How to model supernovae in simulations of star and galaxy formation
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.; Wetzel, Andrew; Kereš, Dušan; Faucher-Giguère, Claude-André; Quataert, Eliot; Boylan-Kolchin, Michael; Murray, Norman; Hayward, Christopher C.; El-Badry, Kareem
2018-06-01
We study the implementation of mechanical feedback from supernovae (SNe) and stellar mass loss in galaxy simulations, within the Feedback In Realistic Environments (FIRE) project. We present the FIRE-2 algorithm for coupling mechanical feedback, which can be applied to any hydrodynamics method (e.g. fixed-grid, moving-mesh, and mesh-less methods), and black hole as well as stellar feedback. This algorithm ensures manifest conservation of mass, energy, and momentum, and avoids imprinting `preferred directions' on the ejecta. We show that it is critical to incorporate both momentum and thermal energy of mechanical ejecta in a self-consistent manner, accounting for SNe cooling radii when they are not resolved. Using idealized simulations of single SN explosions, we show that the FIRE-2 algorithm, independent of resolution, reproduces converged solutions in both energy and momentum. In contrast, common `fully thermal' (energy-dump) or `fully kinetic' (particle-kicking) schemes in the literature depend strongly on resolution: when applied at mass resolution ≳100 M⊙, they diverge by orders of magnitude from the converged solution. In galaxy-formation simulations, this divergence leads to orders-of-magnitude differences in galaxy properties, unless those models are adjusted in a resolution-dependent way. We show that all models that individually time-resolve SNe converge to the FIRE-2 solution at sufficiently high resolution (<100 M⊙). However, in both idealized single-SN simulations and cosmological galaxy-formation simulations, the FIRE-2 algorithm converges much faster than other sub-grid models without re-tuning parameters.
HERMES: Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy
Chan, Kimberly L.; Puts, Nicolaas A. J.; Schär, Michael; Barker, Peter B.; Edden, Richard A. E.
2017-01-01
Purpose To investigate a novel Hadamard-encoded spectral editing scheme and evaluate its performance in simultaneously quantifying N-acetyl aspartate (NAA) and N-acetyl aspartyl glutamate (NAAG) at 3 Tesla. Methods Editing pulses applied according to a Hadamard encoding scheme allow the simultaneous acquisition of multiple metabolites. The method, called HERMES (Hadamard Encoding and Reconstruction of MEGA-Edited Spectroscopy), was optimized to detect NAA and NAAG simultaneously using density-matrix simulations and validated in phantoms at 3T. In vivo data were acquired in the centrum semiovale of 12 normal subjects. The NAA:NAAG concentration ratio was determined by modeling in vivo data using simulated basis functions. Simulations were also performed for potentially coedited molecules with signals within the detected NAA/NAAG region. Results Simulations and phantom experiments show excellent segregation of NAA and NAAG signals into the intended spectra, with minimal crosstalk. Multiplet patterns show good agreement between simulations and phantom and in vivo data. In vivo measurements show that the relative peak intensities of the NAA and NAAG spectra are consistent with a NAA:NAAG concentration ratio of 4.22:1 in good agreement with literature. Simulations indicate some coediting of aspartate and glutathione near the detected region (editing efficiency: 4.5% and 78.2%, respectively, for the NAAG reconstruction and 5.1% and 19.5%, respectively, for the NAA reconstruction). Conclusion The simultaneous and separable detection of two otherwise overlapping metabolites using HERMES is possible at 3T. PMID:27089868
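The Hadamard reconstruction step itself is a simple signed combination of the four sub-experiments. The sketch below shows one common sign convention (illustrative only; the actual HERMES scheme fixes which editing pulses are on in each sub-experiment):

```python
import numpy as np

# HERMES reconstruction sketch: four sub-experiments (rows of a Hadamard
# matrix decide whether each editing pulse is ON or OFF) are combined
# into separate edited spectra. Sign conventions here are illustrative.
def hermes_reconstruct(A, B, C, D):
    edited_1 = A + B - C - D   # combination sensitive to metabolite 1
    edited_2 = A - B + C - D   # combination sensitive to metabolite 2
    summed   = A + B + C + D   # unedited sum spectrum
    return edited_1, edited_2, summed

# toy usage with random stand-in "spectra" of 2048 points each
rng = np.random.default_rng(5)
A, B, C, D = (rng.standard_normal(2048) for _ in range(4))
naa_like, naag_like, total = hermes_reconstruct(A, B, C, D)
```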
A novel method to measure regional muscle blood flow continuously using NIRS kinetics information
Nioka, Shoko; Kime, Ryotaro; Sunar, Ulas; Im, Joohee; Izzetoglu, Meltem; Zhang, Jun; Alacam, Burak; Chance, Britton
2006-01-01
Background This article introduces a novel method to continuously monitor regional muscle blood flow by using Near Infrared Spectroscopy (NIRS). We demonstrate the feasibility of the new method in two ways: (1) by applying this new method of determining blood flow to experimental NIRS data during exercise and ischemia; and (2) by simulating muscle oxygenation and blood flow values using these newly developed equations during recovery from exercise and ischemia. Methods Deoxy (Hb) and oxyhemoglobin (HbO2), located in the blood of the skeletal muscle, carry two internal relationships between blood flow and oxygen consumption. One is a mass transfer principle and the other describes a relationship between oxygen consumption and Hb kinetics in a two-compartment model. To monitor blood flow continuously, we transfer these two relationships into two equations and calculate the blood flow with the differential information of HbO2 and Hb. In addition, these equations are used to simulate the relationship between blood flow and reoxygenation kinetics after cuff ischemia and a light exercise. Nine healthy subjects volunteered for the cuff ischemia, light arm exercise and arm exercise with cuff ischemia for the experimental study. Results Analysis of experimental data of both cuff ischemia and light exercise using the new equations shows greater blood flow (four to six times the resting values) during recovery, agreeing with previous findings. Further, the simulation and experimental studies of cuff ischemia and light exercise agree with each other. Conclusion We demonstrate the accuracy of this new method by showing that the blood flow obtained from the method agrees with previous data as well as with simulated data. We conclude that this novel continuous blood flow monitoring method can provide blood flow information non-invasively with NIRS. PMID:16704736
Using Communication Technology to Enhance Interprofessional Education Simulations
Shrader, Sarah; Shin, Tiffany; Heble, Annie; Kempin, Brian; Miller, Astyn; Patykiewicz, Nick
2016-01-01
Objective. To determine the impact of simulations using an alternative method of communication on students’ satisfaction, attitudes, confidence, and performance related to interprofessional communication. Design. One hundred sixty-three pharmacy students participated in a required applications-based capstone course. Students were randomly assigned to one of three interprofessional education (IPE) simulations with other health professions students using communication methods such as telephone, e-mail, and video conferencing. Assessment. Pharmacy students completed a validated survey instrument, Attitude Toward Healthcare Teams Scale (ATHCTS) prior to and after course participation. Significant positive changes occurred for 5 out of 20 items. Written reflection papers and student satisfaction surveys completed after participation showed positive themes and satisfaction. Course instructors evaluated student performance using rubrics for formative feedback. Conclusion. Implementation of IPE simulations using various methods of communication technology is an effective way for pharmacy schools to incorporate IPE into their curriculum. PMID:26941439
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griebel, M., E-mail: griebel@ins.uni-bonn.de, E-mail: ruettgers@ins.uni-bonn.de; Rüttgers, A., E-mail: griebel@ins.uni-bonn.de, E-mail: ruettgers@ins.uni-bonn.de
The multiscale FENE model is applied to a 3D square-square contraction flow problem. For this purpose, the stochastic Brownian configuration field method (BCF) has been coupled with our fully parallelized three-dimensional Navier-Stokes solver NaSt3DGPF. The robustness of the BCF method enables the numerical simulation of high Deborah number flows for which most macroscopic methods suffer from stability issues. The results of our simulations are compared with those of experimental measurements from the literature and show very good agreement. In particular, flow phenomena such as strong vortex enhancement, streamline divergence and a flow inversion for highly elastic flows are reproduced. Due to their computational complexity, our simulations require massively parallel computations. Using a domain decomposition approach with MPI, the implementation achieves excellent scale-up results for up to 128 processors.
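The stochastic kernel of the BCF method is an ensemble of configuration fields advanced by a stochastic differential equation. A toy Euler-Maruyama step for FENE dumbbells under a fixed velocity gradient, in a common nondimensional form, might look like the following (schematic only; the real solver couples these fields to the Navier-Stokes equations in every cell):

```python
import numpy as np

# Schematic Brownian configuration field update for FENE dumbbells:
# Euler-Maruyama step of dQ = (kappa.Q - F(Q)/(2*lam)) dt + sqrt(1/lam) dW
# in a common nondimensional form. Parameters are illustrative.
rng = np.random.default_rng(8)
b, lam, dt, nfields = 50.0, 1.0, 1e-3, 1000
kappa = np.array([[0.0, 1.0, 0.0],      # illustrative velocity gradient
                  [0.0, 0.0, 0.0],      # (simple shear)
                  [0.0, 0.0, 0.0]])
Q = rng.standard_normal((nfields, 3))

def fene_force(Q):
    q2 = np.sum(Q * Q, axis=1, keepdims=True)
    return Q / (1.0 - np.clip(q2 / b, 0.0, 0.99))   # FENE spring force

for step in range(2000):
    dW = rng.normal(0.0, np.sqrt(dt), size=Q.shape)
    Q += (Q @ kappa.T - fene_force(Q) / (2 * lam)) * dt + np.sqrt(1 / lam) * dW

# Kramers-like polymer stress from the ensemble average of Q (x) F(Q)
tau = np.mean(Q[:, :, None] * fene_force(Q)[:, None, :], axis=0)
print(tau.round(3))
```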
Mixed-RKDG Finite Element Methods for the 2-D Hydrodynamic Model for Semiconductor Device Simulation
Chen, Zhangxin; Cockburn, Bernardo; Jerome, Joseph W.; ...
1995-01-01
In this paper we introduce a new method for numerically solving the equations of the hydrodynamic model for semiconductor devices in two space dimensions. The method combines a standard mixed finite element method, used to obtain directly an approximation to the electric field, with the so-called Runge-Kutta Discontinuous Galerkin (RKDG) method, originally devised for numerically solving multi-dimensional hyperbolic systems of conservation laws, which is applied here to the convective part of the equations. Numerical simulations showing the performance of the new method are displayed, and the results are compared with those obtained using Essentially Nonoscillatory (ENO) finite difference schemes. From the perspective of device modeling, these methods are robust, since they are capable of encompassing broad parameter ranges, including those for which shock formation is possible. The simulations presented here are for Gallium Arsenide at room temperature, but we have tested them much more generally with considerable success.
Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry
2013-01-01
The number and variety of connectivity estimation methods is likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods upon which base direct comparisons can be difficult to derive. Here, we explicitly define “effective connectivity” using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions is generated from the common equation. This simulated data explicitly contains the type of the connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258
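In a generic form consistent with this description (the paper's exact equations may add or modify terms), the common state and observation equations can be written as:

```latex
% Generic shared form (illustrative): a linear state equation with
% modulatory inputs and a nonlinear (e.g. hemodynamic) observation
% equation, each with additive noise.
\begin{aligned}
  \dot{x}(t) &= A\,x(t) + \sum_{j} u_j(t)\,B^{(j)}\,x(t) + C\,u(t) + w(t),\\
  y(t)       &= g\!\big(x(t)\big) + v(t).
\end{aligned}
```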
Analysis of mixed model in gear transmission based on ADAMS
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2012-09-01
The traditional methods of mechanical gear drive simulation include the gear pair method and the solid-to-solid contact method. The former has higher solution efficiency but lower result accuracy; the latter usually obtains higher precision, but the calculation process is complex and does not converge easily. Most current research focuses on the description of geometric models and the definition of boundary conditions, neither of which addresses these problems fundamentally. To improve simulation efficiency while ensuring high accuracy, a mixed model method is presented in which gear tooth profiles take the place of the solid gear to simulate gear movement. In the modeling process, solid models of the mechanism are first built in SolidWorks; then the point coordinates of the gear outline curves are collected using the SolidWorks API and fit curves are created in Adams based on these coordinates; next, the positions of the fitted curves are adjusted according to the position of the contact area; finally, the loading conditions, boundary conditions and simulation parameters are defined. The method provides gear shape information through the tooth profile curves, simulates the meshing process through curve-to-curve contact, and obtains mass and inertia data from the solid gear models. The simulation process combines the two models to complete the gear drive analysis. To verify the validity of the method, both theoretical derivation and numerical simulation of a runaway escapement are conducted. The results show that the computational efficiency of the mixed model method is 1.4 times that of the traditional solid-to-solid contact method, while the simulation results are closer to theoretical calculations. Consequently, the mixed model method has high application value for the study of the dynamics of gear mechanisms.
Modelling rollover behaviour of excavator-based forest machines
M.W. Veal; S.E. Taylor; Robert B. Rummer
2003-01-01
This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...
An Intelligent Tutor for Intrusion Detection on Computer Systems.
ERIC Educational Resources Information Center
Rowe, Neil C.; Schiavo, Sandra
1998-01-01
Describes an intelligent tutor incorporating a program using artificial-intelligence planning methods to generate realistic audit files reporting actions of simulated users and intruders of a UNIX system, and a program simulating the system afterwards that asks students to inspect the audit and fix problems. Experiments show that students using…
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle/particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Ω = 1 CDM models with linear amplitude σ_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
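The contour-based halo finder described above can be prototyped compactly. The following 2-D toy sketch (illustrative, not the paper's 3-D code) smooths a gridded density field and labels the connected regions above a threshold around each peak:

```python
import numpy as np
from scipy import ndimage

# Sketch of the peak/contour halo idea: smooth a density grid, then label
# connected regions above a threshold and assign particles to them.
# A 2-D toy stand-in for the paper's 3-D algorithm; all values illustrative.
def halos_from_density(pos, box, ngrid=128, sigma=2.0, thresh=5.0):
    H, _, _ = np.histogram2d(pos[:, 0], pos[:, 1],
                             bins=ngrid, range=[[0, box], [0, box]])
    smooth = ndimage.gaussian_filter(H, sigma)
    labels, nh = ndimage.label(smooth > thresh * smooth.mean())
    cell = (pos[:, :2] / box * ngrid).astype(int) % ngrid
    return labels[cell[:, 0], cell[:, 1]], nh   # halo id per particle

# toy usage: two clumps plus a uniform background
rng = np.random.default_rng(6)
pts = np.vstack([rng.normal([25, 25], 1.5, (3000, 2)),
                 rng.normal([70, 60], 2.0, (2000, 2)),
                 rng.uniform(0, 100, (5000, 2))]) % 100.0
ids, nhalos = halos_from_density(pts, box=100.0)
print("halos found:", nhalos)   # expect 2
```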
Stochastic model search with binary outcomes for genome-wide association studies.
Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-06-01
The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. The facial feature points are detected automatically, while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the method proposed in this paper can accurately simulate facial expressions. Finally, our method is compared with the previous method, and the data show that the implementation efficiency is greatly improved by our method.
NASA Astrophysics Data System (ADS)
Tian, C.; Weng, J.; Liu, Y.
2017-11-01
The convection heat transfer coefficient is one of the evaluation indexes of brake disc performance. The method used in this paper to calculate the convection heat transfer coefficient is a fluid-solid coupling simulation, because results calculated from empirical formulas differ widely. A model including a brake disc, a car body, a bogie and the flow field was built, meshed and simulated in the software FLUENT, using the standard k-epsilon turbulence model and the energy model, with the working condition of the brake disc taken into account. The coefficients of the various parts of the disc can be obtained with this method. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, 129.6 W/(m²·K), while the average coefficient of the whole disc is 100.4 W/(m²·K); the windward side of the ribs is a positive-pressure area and the leeward side a negative-pressure area, with a maximum pressure of 2663.53 Pa.
Olson, Mark A; Lee, Michael S
2014-01-01
A central problem of computational structural biology is the refinement of modeled protein structures taken from either comparative modeling or knowledge-based methods. Simulations are commonly used to achieve higher resolution of the structures at the all-atom level, yet methodologies that consistently yield accurate results remain elusive. In this work, we provide an assessment of an adaptive temperature-based replica exchange simulation method in which the temperature clients dynamically walk in temperature space to enrich their population and exchange rate near steep energetic barriers. This approach is compared to earlier work applying the conventional method of static temperature clients to refine a dataset of conformational decoys. Our results show that, while an adaptive method has many theoretical advantages over a static distribution of client temperatures, only limited improvement was gained from this strategy in excursions into the downhill refinement regime leading to an increase in the fraction of native contacts. To illustrate the sampling differences between the two simulation methods, energy landscapes are presented along with their temperature client profiles.
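For context, the exchange step in any replica-exchange scheme, adaptive or static, uses the standard Metropolis criterion shown below; in the adaptive variant the temperature list itself would additionally be updated during the run (a minimal sketch, not the authors' code):

```python
import numpy as np

# Standard replica-exchange swap criterion: accept an exchange between
# configurations at temperatures T_i and T_j (energies E_i, E_j) with
# probability min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)]) in reduced units.
def try_swap(E_i, E_j, T_i, T_j, rng):
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return rng.random() < min(1.0, np.exp(delta))

rng = np.random.default_rng(7)
print(try_swap(E_i=-100.0, E_j=-90.0, T_i=1.0, T_j=1.2, rng=rng))
```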
NASA Astrophysics Data System (ADS)
Kang, Seokkoo; Borazjani, Iman; Sotiropoulos, Fotis
2008-11-01
Unsteady 3D simulations of flows in natural streams is a challenging task due to the complexity of the bathymetry, the shallowness of the flow, and the presence of multiple nature- and man-made obstacles. This work is motivated by the need to develop a powerful numerical method for simulating such flows using coherent-structure-resolving turbulence models. We employ the curvilinear immersed boundary method of Ge and Sotiropoulos (Journal of Computational Physics, 2007) and address the critical issue of numerical efficiency in large aspect ratio computational domains and grids such as those encountered in long and shallow open channels. We show that the matrix-free Newton-Krylov method for solving the momentum equations coupled with an algebraic multigrid method with incomplete LU preconditioner for solving the Poisson equation yield a robust and efficient procedure for obtaining time-accurate solutions in such problems. We demonstrate the potential of the numerical approach by carrying out a direct numerical simulation of flow in a long and shallow meandering stream with multiple hydraulic structures.
NASA Astrophysics Data System (ADS)
Ceperley, Daniel Peter
This thesis presents a Finite-Difference Time-Domain simulation framework as well as both scientific observations and quantitative design data for emerging optical devices. These emerging applications required the development of simulation capabilities to carefully control numerical experimental conditions, isolate and quantify specific scattering processes, and overcome memory and run-time limitations on large device structures. The framework consists of a new version 7 of TEMPEST and auxiliary tools implemented as Matlab scripts. In improving the geometry representation and absorbing boundary conditions from TEMPEST v6, accuracy has been sustained, and key changes have yielded application-specific gains in speed and accuracy. These extensions include pulsed methods, PML for plasmon termination, and plasmon and scattered-field sources. The auxiliary tools include application-specific methods such as signal flow graphs of plasmon couplers, Bloch mode expansions of sub-wavelength grating waves, and back-propagation methods to characterize edge scattering in diffraction masks. Each application posed different numerical hurdles and physical questions for the simulation framework. The Terrestrial Planet Finder Coronagraph required accurate modeling of diffraction mask structures too large for FDTD analysis alone. This analysis was achieved through a combination of targeted TEMPEST simulations and a full system simulator based on thin-mask scalar diffraction models by Ball Aerospace for JPL. TEMPEST simulation showed that vertical sidewalls were the strongest scatterers, adding nearly 2λ of light per mask edge, which could be reduced by 20° undercuts. TEMPEST assessment of coupling in rapid thermal annealing was complicated by extremely sub-wavelength features and fine meshes. Near-100% coupling and low variability were confirmed even in the presence of unidirectional dense metal gates. Accurate analysis of surface plasmon coupling efficiency by small surface features required capabilities to isolate these features and cleanly illuminate them with plasmons and plane waves. These features were shown to have coupling cross-sections up to and slightly exceeding their physical size. Long run-times for TEMPEST simulations of finite-length gratings were overcome with a signal flow graph method. With these methods a plasmon coupler with a 100% capture length of over 10λ was demonstrated. Simulation of 3D nano-particle arrays utilized TEMPEST v7's pulsed methods to minimize the number of multi-day simulations. These simulations led to the discovery that interstitial plasmons were responsible for resonant absorption and transmission but not reflection. Simulation of a sub-wavelength grating mirror using pulsed sources to map resonant spectra showed that neither coupled guided waves nor coupled isolated resonators accurately described its operation. However, a new model based on vertical propagation of lateral Bloch modes with zero phase progression efficiently characterized the device and provided principles for designing similar devices at other wavelengths.
Verifying the Simulation Hypothesis via Infinite Nested Universe Simulacrum Loops
NASA Astrophysics Data System (ADS)
Sharma, Vikrant
2017-01-01
The simulation hypothesis proposes that local reality exists as a simulacrum within a hypothetical computer's dimension. More specifically, Bostrom's trilemma proposes that the number of simulations an advanced 'posthuman' civilization could produce makes the proposition very likely. In this paper a hypothetical method to verify the simulation hypothesis is discussed, using infinite regression applied to a new type of infinite loop. Assign dimension n to any computer in our present reality, where the dimension signifies the hierarchical level, in nested simulations, at which our reality exists. A computer simulating known reality would be dimension (n-1), and likewise a computer simulating an artificial reality, such as a video game, would be dimension (n+1). In this method, among others, four key assumptions are made about the nature of the original computer dimension n. Summations show that regressing such a reality infinitely produces convergence, implying that whether local reality is a grand simulation is feasible to verify with adequate compute capability. The act of reaching said convergence point halts the simulation of local reality. Sensitivities to the four assumptions, and their implications, are discussed.
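The convergence claim can be illustrated with a toy assumption of our own (not the paper's actual model): if each nested simulation runs at a fixed fraction r < 1 of its parent's compute capacity C, the summed capacity over infinitely many nesting levels is a convergent geometric series:

```latex
% Toy illustration (assumed geometric decay of capacity with nesting depth k):
\sum_{k=0}^{\infty} C\, r^{k} \;=\; \frac{C}{1 - r}, \qquad 0 < r < 1 .
```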
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage after these types of accidents. Crashworthiness analysis has mainly been based on two methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite element simulation, which reveals that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
On the Measurements of Numerical Viscosity and Resistivity in Eulerian MHD Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rembiasz, Tomasz; Obergaulinger, Martin; Cerdá-Durán, Pablo
2017-06-01
We propose a simple ansatz for estimating the value of the numerical resistivity and the numerical viscosity of any Eulerian MHD code. We test this ansatz with the help of simulations of the propagation of (magneto)sonic waves, Alfvén waves, and the tearing mode (TM) instability using the MHD code Aenus. By comparing the simulation results with analytical solutions of the resistive-viscous MHD equations and an empirical ansatz for the growth rate of TMs, we measure the numerical viscosity and resistivity of Aenus. The comparison shows that the fast magnetosonic speed and wavelength are the characteristic velocity and length, respectively, of the aforementioned (relatively simple) systems. We also determine the dependence of the numerical viscosity and resistivity on the time integration method, the spatial reconstruction scheme, and (to a lesser extent) the Riemann solver employed in the simulations. From the measured results, we infer the numerical resolution (as a function of the spatial reconstruction method) required to properly resolve the growth and saturation level of the magnetic field amplified by the magnetorotational instability in the post-collapse core of massive stars. Our results show that it is most advantageous to resort to ultra-high-order methods (e.g., the ninth-order monotonicity-preserving method) to tackle this problem properly, in particular in three-dimensional simulations.
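A sketch of the measurement idea, under stated assumptions: fit the exponential decay of a wave amplitude in code output to extract an effective (numerical) dissipation coefficient. The damping model A(t) = A0 exp(-eta k^2 t / 2) is the small-amplitude resistive result for Alfvén waves; the synthetic "simulation output" below is a stand-in for real code data:

```python
# Fit a damped-wave amplitude series to recover an effective resistivity.
# The synthetic data with a hidden eta_true stands in for code output.
import numpy as np
from scipy.optimize import curve_fit

k = 2 * np.pi / 1.0                          # wavenumber of the test wave
t = np.linspace(0.0, 50.0, 200)
eta_true = 1e-3                              # hidden "numerical" resistivity
noise = 1 + 0.01 * np.random.default_rng(1).standard_normal(t.size)
amp = np.exp(-0.5 * eta_true * k**2 * t) * noise

def model(t, a0, eta):
    return a0 * np.exp(-0.5 * eta * k**2 * t)

(a0_fit, eta_fit), _ = curve_fit(model, t, amp, p0=(1.0, 1e-4))
print(f"measured numerical resistivity ~ {eta_fit:.2e}")
```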
The Design and Semi-Physical Simulation Test of Fault-Tolerant Controller for Aero Engine
NASA Astrophysics Data System (ADS)
Liu, Yuan; Zhang, Xin; Zhang, Tianhong
2017-11-01
A new fault-tolerant control method for aero engines is proposed, which can accurately diagnose sensor faults using Kalman filter banks and reconstruct the signal using a real-time on-board adaptive model that combines a simplified real-time model with an improved Kalman filter. In order to verify the feasibility of the proposed method, a semi-physical simulation experiment has been carried out. Besides the real I/O interfaces, controller hardware, and the virtual plant model, the semi-physical simulation system also contains a real fuel system. Compared with hardware-in-the-loop (HIL) simulation, the semi-physical simulation system has a higher degree of confidence. In order to meet the needs of semi-physical simulation, a rapid-prototyping controller with fault-tolerant control ability, based on the NI CompactRIO platform, is designed and verified on the semi-physical simulation test platform. The result shows that the controller can control the aero engine safely and reliably, with little influence on controller performance in the event of a sensor fault.
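A hedged sketch of the fault-isolation idea behind a Kalman filter bank: each filter in the bank ignores one sensor, and the hypothesis whose innovations stay smallest points to the faulty sensor. The scalar random-walk model, noise levels, and fault statistic below are illustrative assumptions, not the engine model of the paper:

```python
# Filter-bank sensor fault isolation on synthetic data (illustrative).
import numpy as np

rng = np.random.default_rng(0)
T, n_sensors = 200, 3
truth = np.cumsum(0.01 * rng.standard_normal(T))      # slowly varying state
meas = truth[:, None] + 0.05 * rng.standard_normal((T, n_sensors))
meas[100:, 1] += 1.0                                   # bias fault on sensor 1

def innovation_energy(z, q=1e-4, r=0.05**2):
    """Scalar Kalman filter on stream z; returns summed squared normalized
    innovations (a chi-square-like fault statistic)."""
    x, p, stat = 0.0, 1.0, 0.0
    for zk in z:
        p += q                         # predict (random-walk state model)
        s = p + r                      # innovation variance
        nu = zk - x                    # innovation
        stat += nu * nu / s
        kgain = p / s                  # measurement update
        x += kgain * nu
        p *= 1.0 - kgain
    return stat

# Hypothesis i: sensor i is faulty, so fuse only the remaining sensors.
scores = []
for i in range(n_sensors):
    others = np.delete(np.arange(n_sensors), i)
    scores.append(innovation_energy(meas[:, others].mean(axis=1)))
print("suspected faulty sensor:", int(np.argmin(scores)))
```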
Quantum Dot Detectors with Plasmonic Structures
2015-05-15
[Fragmentary abstract excerpts] The report describes a surface plasmon polariton (SPP) mode and a guided Fabry-Perot mode that enhance the x or y field components (along the polarization direction used in simulation) and the z component; the resulting field enhancements are shown in the inset to Fig. 3 of the report. The simulation method accomplished in the paper is described as providing a generalized approach to optimization.
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers in each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. The numerical experiment shows that the parallel algorithm can improve the acoustic simulation efficiency for complex scenes.
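A sketch of the parallel decomposition described above: each worker process computes the response for one frequency band, and the bands are then summed into one broadband impulse response. The per-band computation below is a placeholder decay model, not the paper's radiosity solver; sample rate, band edges, and decay law are assumptions:

```python
# Per-band responses computed in parallel, then combined (illustrative).
import numpy as np
from multiprocessing import Pool

FS, N = 8000, 4096            # sample rate (Hz) and response length (samples)

def band_response(band):
    """Placeholder per-band energy decay; a real solver would trace sound
    energy exchange between surface patches for this band."""
    f_lo, f_hi = band
    t = np.arange(N) / FS
    rt60 = 1.5 - 0.5 * (f_lo / FS)           # pretend high bands decay faster
    return np.exp(-6.91 * t / rt60) * np.cos(2 * np.pi * 0.5 * (f_lo + f_hi) * t)

if __name__ == "__main__":
    bands = [(125, 250), (250, 500), (500, 1000), (1000, 2000)]
    with Pool(processes=4) as pool:
        parts = pool.map(band_response, bands)
    impulse_response = np.sum(parts, axis=0)   # combine bands
    print(impulse_response.shape)
```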
Stochastic model search with binary outcomes for genome-wide association studies
Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-01-01
Objective The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precision than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS is effective in detecting relevant markers while providing a parsimonious model. PMID:22534080
A method of emotion contagion for crowd evacuation
NASA Astrophysics Data System (ADS)
Cao, Mengxiao; Zhang, Guijuan; Wang, Mengsi; Lu, Dianjie; Liu, Hong
2017-10-01
Current evacuation models do not consider the impact of emotion and personality on crowd evacuation. Thus, there is a large difference between evacuation results and the real-life behavior of the crowd. In order to generate more realistic crowd evacuation results, we present a method of emotion contagion for crowd evacuation. First, we combine the OCEAN (Openness, Conscientiousness, Extroversion, Agreeableness, Neuroticism) model and the SIS (Susceptible-Infected-Susceptible) model to construct the P-SIS (Personalized SIS) emotional contagion model. The P-SIS model effectively captures the diversity of individuals in a crowd. Second, we couple the P-SIS model with the social force model to simulate the effect of emotional contagion on crowd evacuation. Finally, a photo-realistic rendering method is employed to obtain the animation of the crowd evacuation. Experimental results show that our method can simulate crowd evacuation realistically and has guiding significance for crowd evacuation in emergency circumstances.
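A hedged sketch of an SIS-style contagion step with personality-dependent susceptibility, in the spirit of the P-SIS idea: each agent's infection rate is scaled by a trait drawn from an OCEAN-like vector. The mean-field contact model, the scaling rule, and all parameters are our illustrative assumptions, not the paper's calibrated model:

```python
# SIS-style emotion contagion with trait-scaled susceptibility (illustrative).
import numpy as np

rng = np.random.default_rng(2)
n = 200
traits = rng.uniform(0.0, 1.0, (n, 5))     # stand-in OCEAN personality vectors
suscept = 0.5 + 0.5 * traits[:, 4]         # assume neuroticism raises susceptibility
panic = rng.random(n) < 0.05               # initial emotional (infected) agents
beta, gamma = 0.3, 0.1                     # base contagion / recovery rates

def step(panic):
    frac = panic.mean()                    # mean-field contact with panicked agents
    infect = (~panic) & (rng.random(n) < beta * suscept * frac)
    recover = panic & (rng.random(n) < gamma)
    return (panic | infect) & ~recover

for _ in range(100):
    panic = step(panic)
print("final panicked fraction:", panic.mean())
```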
Chung, Tae Nyoung; Kim, Sun Wook; You, Je Sung; Chung, Hyun Soo
2016-01-01
Objective Tube thoracostomy (TT) is a commonly performed intensive care procedure. Simulator training may be a good alternative method for TT training, compared with conventional methods such as apprenticeship and animal skills laboratories. However, there is insufficient evidence supporting the use of a simulator. The aim of this study is to determine whether training with a medical simulator is associated with a faster TT process, compared to conventional training without a simulator. Methods This is a simulation study. Eligible participants were emergency medicine residents with very little TT experience (≤3 procedures). Participants were randomized to two groups: the conventional training group and the simulator training group. While the simulator training group used the simulator to practice TT, the conventional training group watched the instructor performing TT on a cadaver. After training, all participants performed a TT on a cadaver. The performance quality was measured as correct placement and time delay. Subjects were graded on whether they had difficulty with the process. Results The estimated median procedure time was 228 seconds in the conventional training group and 75 seconds in the simulator training group, a statistically significant difference (P=0.040). The difficulty grading did not show any significant difference between groups (overall performance scale, 2 vs. 3; P=0.094). Conclusion Tube thoracostomy training with a medical simulator, when compared to no simulator training, is associated with a significantly faster procedure when performed on a human cadaver. PMID:27752610
Pea, Rany; Dansereau, Jean; Caouette, Christiane; Cobetto, Nikita; Aubin, Carl-Éric
2018-05-01
Orthopedic braces made by computer-aided design and manufacturing and numerical simulation were shown to improve the correction of spinal deformities in adolescent idiopathic scoliosis while using less material. Simulations with BraceSim (Rodin4D, Groupe Lagarrigue, Bordeaux, France) require a sagittal radiograph, which is not always available. The objective was to develop an innovative modeling method based on a single coronal radiograph and surface topography, and to assess the effectiveness of braces designed with this approach. With a patient's coronal radiograph and a surface topography, the developed method allowed the 3D reconstruction of the spine, rib cage, and pelvis using geometric models from a database and a free-form deformation technique. The resulting 3D reconstruction, converted into a finite element model, was used to design and simulate the correction of a brace. The developed method was tested with data from ten scoliosis cases. The simulated correction was compared to analogous simulations performed with a 3D reconstruction built using two radiographs and surface topography (the validated gold-standard reference). There was an average difference of 1.4°/1.7° for the thoracic/lumbar Cobb angle, and 2.6°/5.5° for the kyphosis/lordosis, between the developed reconstruction method and the reference. The average difference of the simulated correction was 2.8°/2.4° for the thoracic/lumbar Cobb angles and 3.5°/5.4° for the kyphosis/lordosis. This study showed the feasibility of designing and simulating brace corrections based on a new modeling method using a single coronal radiograph and surface topography. This innovative method could be used to improve brace designs at a lower radiation dose for the patient. Copyright © 2018 Elsevier Ltd. All rights reserved.
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular value decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis data set. This data handling can be regarded as an advanced version of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realized approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, for which the number of future-climate RCM simulations is five. On the other hand, local-scale rainfall needed four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
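A sketch of the incremental analysis step, under stated assumptions: GCM increments are decomposed by SVD, and perturbed boundary forcings are formed as the ensemble mean plus or minus one standard deviation of each leading mode. Array sizes and the random data are synthetic placeholders:

```python
# SVD of multi-GCM climatological increments -> mean +/- 1-sigma modes.
import numpy as np

rng = np.random.default_rng(3)
n_gcm, n_grid = 8, 500
increments = rng.standard_normal((n_gcm, n_grid))   # future-minus-present fields
mean_inc = increments.mean(axis=0)
anom = increments - mean_inc

# Rows of vt are spatial modes; s carries their ensemble amplitudes.
u, s, vt = np.linalg.svd(anom, full_matrices=False)
n_modes = 2
perturbed = []
for m in range(n_modes):
    sigma_mode = s[m] / np.sqrt(n_gcm - 1)          # std dev carried by mode m
    for sign in (+1.0, -1.0):
        perturbed.append(mean_inc + sign * sigma_mode * vt[m])

# 1 mean + 2 * n_modes perturbed increments -> 5 RCM boundary conditions,
# matching the "five simulations for two modes" count in the abstract.
print(len(perturbed) + 1)
```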
Zhao, Yong-guang; Ma, Ling-ling; Li, Chuan-rong; Zhu, Xiao-hua; Tang, Ling-li
2015-07-01
Due to the lack of sufficient spectral bands in multi-spectral sensors, it is difficult to reconstruct a surface reflectance spectrum from the finite spectral information acquired by a multi-spectral instrument. Here, taking full account of the heterogeneity of pixels in remote sensing images, a method is proposed to simulate hyperspectral data from multispectral data based on a canopy radiative transfer model. This method first assumes that mixed pixels contain two types of land cover, i.e., vegetation and soil. The sensitive parameters of the Soil-Leaf-Canopy (SLC) model and a soil ratio factor were retrieved from multi-spectral data based on Look-Up Table (LUT) technology. Then, combined with the soil ratio factor, all the parameters were input into the SLC model to simulate the surface reflectance spectrum from 400 to 2400 nm. Taking a Landsat Enhanced Thematic Mapper Plus (ETM+) image as the reference image, the surface reflectance spectrum was simulated. The simulated reflectance spectra revealed different feature information for different surface types. To test the performance of this method, the simulated reflectance spectrum was convolved with the Landsat ETM+ spectral response curves and Moderate Resolution Imaging Spectroradiometer (MODIS) spectral response curves to obtain simulated Landsat ETM+ and MODIS images. Finally, the simulated Landsat ETM+ and MODIS images were compared with the observed Landsat ETM+ and MODIS images. The results generally showed high correlation coefficients (Landsat: 0.90-0.99, MODIS: 0.74-0.85) between most simulated and observed bands, and indicated that the simulated reflectance spectrum was reliable.
Towards Virtual FLS: Development of a Peg Transfer Simulator
Arikatla, Venkata S; Ahn, Woojin; Sankaranarayanan, Ganesh; De, Suvranu
2014-01-01
Background Peg transfer is one of five tasks in the Fundamentals of Laparoscopic Surgery (FLS) program. We report the development and validation of a Virtual Basic Laparoscopic Skill Trainer-Peg Transfer (VBLaST-PT©) simulator for automatic real-time scoring and objective quantification of performance. Methods We introduced new techniques to allow bi-manual manipulation of pegs and automatic scoring/evaluation while maintaining high simulation quality. We performed a preliminary face and construct validation study with 22 subjects divided into two groups: experts (PGY 4-5, fellows, and practicing surgeons) and novices (PGY 1-3). Results Face validation showed high scores for all aspects of the simulation. Two-tailed Mann-Whitney U-tests showed significant differences between the two groups in completion time (p=0.003), FLS score (p=0.002), and the VBLaST-PT© score (p=0.006). Conclusions VBLaST-PT© is a high-quality virtual simulator that showed both face and construct validity. PMID:24030904
NASA Astrophysics Data System (ADS)
Kerschbaum, M.; Hopmann, C.
2016-06-01
The computationally efficient simulation of the progressive damage behaviour of continuous fibre reinforced plastics is still a challenging task for currently available computer-aided engineering methods. This paper presents an original approach to an energy-based continuum damage model which accounts for stress/strain nonlinearities, transverse and shear stress interaction phenomena, quasi-plastic shear strain components, strain-rate effects, regularised damage evolution, and load-reversal effects. The physically based modelling approach enables experimental determination of all parameters at the ply level, avoiding expensive inverse analysis procedures. The modelling strategy, implementation, and verification of this model using commercially available explicit finite element software are detailed. The model is then applied to simulate the impact and penetration of carbon fibre reinforced cross-ply specimens with varying impact speeds. The simulation results show that the presented approach enables a good representation of the force/displacement curves and especially good agreement with the experimentally observed fracture patterns. In addition, the mesh dependency of the results was assessed for one impact case, showing only very little change in the simulation results, which emphasises the general applicability of the presented method.
Quantum Fragment Based ab Initio Molecular Dynamics for Proteins.
Liu, Jinfeng; Zhu, Tong; Wang, Xianwei; He, Xiao; Zhang, John Z H
2015-12-08
Developing ab initio molecular dynamics (AIMD) methods for practical application in protein dynamics is of significant interest. Due to the large size of biomolecules, applying standard quantum chemical methods to compute energies for dynamic simulation is computationally prohibitive. In this work, a fragment-based ab initio molecular dynamics approach is presented for practical application in protein dynamics studies. In this approach, the energy and forces of the protein are calculated by the recently developed electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method. For simulation in explicit solvent, mechanical embedding is introduced to treat the protein's interaction with explicit water molecules. This AIMD approach has been applied to MD simulations of a small benchmark protein, Trp-cage (with 20 residues and 304 atoms), in both the gas phase and in solution. Comparison to the simulation result using the AMBER force field shows that AIMD gives a more stable protein structure in the simulation, indicating that the quantum chemical energy is more reliable. Importantly, the present fragment-based AIMD simulation captures quantum effects, including electrostatic polarization and charge transfer, that are missing in standard classical MD simulations. The current approach is linear-scaling, trivially parallel, and applicable to AIMD simulation of proteins of large size.
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular-scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research, the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum, and uses a nonholonomic modeling approach to systematically couple the models developed at each scale. Example applications of the method show meso-macroscale shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
magnum.fe: A micromagnetic finite-element simulation code based on FEniCS
NASA Astrophysics Data System (ADS)
Abert, Claas; Exl, Lukas; Bruckner, Florian; Drews, André; Suess, Dieter
2013-11-01
We have developed a finite-element micromagnetic simulation code based on the FEniCS package called magnum.fe. Here we describe the numerical methods that are applied as well as their implementation with FEniCS. We apply a transformation method for the solution of the demagnetization-field problem. A semi-implicit weak formulation is used for the integration of the Landau-Lifshitz-Gilbert equation. Numerical experiments show the validity of simulation results. magnum.fe is open source and well documented. The broad feature range of the FEniCS package makes magnum.fe a good choice for the implementation of novel micromagnetic finite-element algorithms.
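magnum.fe integrates the Landau-Lifshitz-Gilbert (LLG) equation with a semi-implicit finite-element formulation; the macrospin sketch below only illustrates the ODE being integrated, using a simple explicit step with renormalization. The field value, damping constant, and time step are toy assumptions:

```python
# Macrospin LLG integration in the Landau-Lifshitz form (illustrative).
import numpy as np

gamma, alpha = 2.211e5, 0.1          # gyromagnetic ratio (m/(A s)), damping
h_eff = np.array([0.0, 0.0, 8e5])    # static effective field (A/m), assumed
m = np.array([1.0, 0.0, 0.0])        # unit magnetization, starts along x
dt = 1e-13                           # time step (s)

def llg_rhs(m):
    """dm/dt for the LLG equation in Landau-Lifshitz form."""
    prefac = -gamma / (1.0 + alpha**2)
    mxh = np.cross(m, h_eff)
    return prefac * (mxh + alpha * np.cross(m, mxh))

for _ in range(10000):
    m = m + dt * llg_rhs(m)
    m /= np.linalg.norm(m)           # keep |m| = 1 after each explicit step
print(m)                             # damped precession toward the field (+z)
```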
Probabilistic simulation of uncertainties in composite uniaxial strengths
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Stock, T. A.
1990-01-01
Probabilistic composite micromechanics methods are developed that simulate uncertainties in unidirectional fiber composite strengths. These methods take the form of computational procedures using composite mechanics with Monte Carlo simulation. The variables for which uncertainties are accounted include constituent strengths and their respective scatter. A graphite/epoxy unidirectional composite (ply) is studied to illustrate the procedure and its effectiveness in formally estimating the probable scatter in the composite uniaxial strengths. The results show that ply longitudinal tensile and compressive, transverse compressive, and intralaminar shear strengths are not sensitive to single-fiber anomalies (breaks, interfacial disbonds, matrix microcracks); however, the ply transverse tensile strength is.
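A sketch of the probabilistic micromechanics idea under simplifying assumptions of our own: sample constituent strengths from an assumed scatter, push each sample through a simple micromechanics rule, and collect the resulting distribution of ply strength. The rule-of-mixtures relation and the statistics are illustrative, not the report's composite mechanics code:

```python
# Monte Carlo propagation of constituent-strength scatter to ply strength.
import numpy as np

rng = np.random.default_rng(4)
n_samples = 20000
vf = 0.6                                             # fiber volume fraction
fiber_s = rng.normal(3500.0, 350.0, n_samples)       # fiber strength (MPa), assumed
matrix_s = rng.normal(80.0, 12.0, n_samples)         # matrix strength (MPa), assumed

# Rule-of-mixtures estimate of ply longitudinal tensile strength.
ply_long = vf * fiber_s + (1.0 - vf) * matrix_s
print(f"mean {ply_long.mean():.0f} MPa, "
      f"c.o.v. {ply_long.std() / ply_long.mean():.2%}")
```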
NASA Astrophysics Data System (ADS)
Nasehnejad, Maryam; Nabiyouni, G.; Gholipour Shahraki, Mehran
2018-03-01
In this study a 3D multi-particle diffusion limited aggregation method is employed to simulate the growth of rough surfaces with fractal behavior in the electrodeposition process. A deposition model is used in which the radial motion of the particles, with probability P, competes with random motions with probability 1 - P. Thin film growth is simulated for different values of the probability P (related to the electric field) and the thickness of the layer (related to the number of deposited particles). The influence of these parameters on the morphology, kinetics of roughening, and fractal dimension of the simulated surfaces has been investigated. The results show that the surface roughness increases with increasing deposition time and that the scaling exponents exhibit a complex behavior known as anomalous scaling. It seems that in the electrodeposition process, radial motion of the particles toward the growing seeds may be an important mechanism leading to anomalous scaling. The results also indicate that larger values of the probability P result in a smoother topography with a more densely packed structure. We have suggested a dynamic scaling ansatz for the interface width as a function of deposition time, scan length, and probability. Two different methods are employed to evaluate the fractal dimension of the simulated surfaces: the "cube counting" and "roughness" methods. The results of both methods show that increasing the probability P or decreasing the deposition time increases the fractal dimension of the simulated surfaces. All obtained values of the fractal dimension are close to 2.5, consistent with the diffusion limited aggregation model.
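A minimal on-lattice DLA sketch (2D for brevity; the paper uses a 3D multi-particle variant). With probability P the walker steps toward the cluster center, mimicking the field-driven radial motion; otherwise it steps randomly. Lattice size, P, and particle count are illustrative:

```python
# Biased-walk DLA on a small 2D lattice (illustrative of the growth rule).
import numpy as np

rng = np.random.default_rng(5)
L, P, n_particles = 101, 0.2, 400
grid = np.zeros((L, L), dtype=bool)
c = L // 2
grid[c, c] = True                                  # seed at the center
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

for _ in range(n_particles):
    x, y = rng.integers(1, L - 1, size=2)          # random release point
    while True:
        if rng.random() < P:                       # biased (radial) step
            dx, dy = np.sign(c - x), np.sign(c - y)
        else:                                      # random step
            dx, dy = moves[rng.integers(4)]
        x = int(np.clip(x + dx, 1, L - 2))
        y = int(np.clip(y + dy, 1, L - 2))
        if grid[x - 1:x + 2, y - 1:y + 2].any():   # touched the cluster: stick
            grid[x, y] = True
            break
print("cluster size:", grid.sum())
```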
Exact hybrid particle/population simulation of rule-based models of biochemical systems.
Hogg, Justin S; Harris, Leonard A; Stover, Lori J; Nair, Niketh S; Faeder, James R
2014-04-01
Detailed modeling and simulation of biochemical systems is complicated by the problem of combinatorial complexity, an explosion in the number of species and reactions due to myriad protein-protein interactions and post-translational modifications. Rule-based modeling overcomes this problem by representing molecules as structured objects and encoding their interactions as pattern-based rules. This greatly simplifies the process of model specification, avoiding the tedious and error-prone task of manually enumerating all species and reactions that can potentially exist in a system. From a simulation perspective, rule-based models can be expanded algorithmically into fully-enumerated reaction networks and simulated using a variety of network-based simulation methods, such as ordinary differential equations or Gillespie's algorithm, provided that the network is not exceedingly large. Alternatively, rule-based models can be simulated directly using particle-based kinetic Monte Carlo methods. This "network-free" approach produces exact stochastic trajectories with a computational cost that is independent of network size. However, memory and run-time costs increase with the number of particles, limiting the size of system that can be feasibly simulated. Here, we present a hybrid particle/population simulation method that combines the best attributes of both the network-based and network-free approaches. The method takes as input a rule-based model and a user-specified subset of species to treat as population variables rather than as particles. The model is then transformed by a process of "partial network expansion" into a dynamically equivalent form that can be simulated using a population-adapted network-free simulator. The transformation method has been implemented within the open-source rule-based modeling platform BioNetGen, and resulting hybrid models can be simulated using the particle-based simulator NFsim. Performance tests show that significant memory savings can be achieved using the new approach and a monetary cost analysis provides a practical measure of its utility.
Modeling Electrokinetic Flows by the Smoothed Profile Method
Luo, Xian; Beskok, Ali; Karniadakis, George Em
2010-01-01
We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the surrounding electrolyte solution. The electrokinetic forces are included in the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated against benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results for the electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with experimental values. PMID:20352076
Rauscher, Sarah; Neale, Chris; Pomès, Régis
2009-10-13
Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
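A sketch of the core simulated-tempering move shared by ST and STDR: propose a jump between neighboring temperatures and accept it with a Metropolis criterion involving the current potential energy and per-temperature weight factors. The weights g[i] (free-energy estimates) are assumed given here, whereas STDR is specifically designed to avoid their lengthy pre-computation; the temperature ladder and energies are toy values:

```python
# Simulated-tempering temperature move (illustrative, not the STDR scheduler).
import numpy as np

rng = np.random.default_rng(6)
kB = 0.0083145                      # kJ/(mol K)
temps = np.array([300.0, 320.0, 341.0, 364.0])
betas = 1.0 / (kB * temps)
g = np.zeros(len(temps))            # weight factors, assumed known here

def attempt_temperature_move(i, energy):
    """Return the new temperature index after one tempering attempt."""
    j = i + rng.choice((-1, 1))
    if not 0 <= j < len(temps):
        return i                    # reflect at the ends of the ladder
    log_acc = -(betas[j] - betas[i]) * energy + (g[j] - g[i])
    return j if np.log(rng.random()) < log_acc else i

idx = 0
for _ in range(1000):
    energy = rng.normal(-500.0, 20.0)   # stand-in for the sampled potential energy
    idx = attempt_temperature_move(idx, energy)
print("final temperature:", temps[idx])
```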
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Angulo, Raul E.
2016-01-01
N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.
Shahbazi-Gahrouei, Daryoush; Ayat, Saba
2012-01-01
Radioiodine therapy is an effective method for treating thyroid carcinoma, but it has some effects on normal tissues, hence dosimetry of vital organs is important to weigh the risks and benefits of this method. The aim of this study is to measure the absorbed doses of important organs by Monte Carlo N-Particle (MCNP) simulation and to compare the results of different dosimetry methods by performing a paired t-test. To calculate the absorbed dose of the thyroid, sternum, and cervical vertebra using the MCNP code, the *F8 tally was used. Organs were simulated using a neck phantom and the Medical Internal Radiation Dosimetry (MIRD) method. Finally, the results of MCNP, MIRD, and thermoluminescent dosimeter (TLD) measurements were compared using SPSS software. The absorbed doses obtained by Monte Carlo simulation for 100, 150, and 175 mCi of administered 131I were 388.0, 427.9, and 444.8 cGy for the thyroid, 208.7, 230.1, and 239.3 cGy for the sternum, and 272.1, 299.9, and 312.1 cGy for the cervical vertebra. The p values of the paired t-tests were 0.24 for comparing TLD dosimetry and MIRD calculation, 0.80 for MCNP simulation and MIRD, and 0.19 for TLD and MCNP. The results showed no significant differences among the three methods: Monte Carlo simulation, MIRD calculation, and direct experimental dosimetry using TLD. PMID:23717806
NASA Astrophysics Data System (ADS)
Ambarita, H.; Ronowikarto, A. D.; Siregar, R. E. T.; Setyawan, E. Y.
2018-01-01
Desalination technology is one solution to water scarcity. Using renewable energy, such as solar, wind, and geothermal energy, is expected to reduce the energy demand. This requires a study of the modeling and transport-parameter determination of natural vacuum solar desalination, using the computational fluid dynamics (CFD) method to simulate the model. A three-dimensional, two-phase model was developed for the evaporation-condensation phenomenon in natural vacuum solar desalination. The CFD simulation results were compared with the available experimental data. The simulation results show that there is an evaporation-condensation phenomenon in the evaporation chamber. From the simulation, the fresh water productivity is 2.21 litres, while the experimental value is 2.1 litres; the study reports an error of 0.4%. The CFD results also show that vacuum pressure lowers the saturation temperature of the seawater.
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
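The numerical issue discussed above can be illustrated on a toy nonlinear reservoir, dS/dt = p - k*S**m, comparing a first-order explicit fixed-step scheme against an adaptive higher-order solver. The parameter values are arbitrary stand-ins for a conceptual rainfall-runoff store, not the paper's model:

```python
# Fixed-step explicit Euler vs. adaptive-step reference on a toy reservoir.
import numpy as np
from scipy.integrate import solve_ivp

k, m, p = 0.8, 1.5, 2.0                    # recession, nonlinearity, rainfall

def rhs(t, s):
    return p - k * np.maximum(s, 0.0) ** m

# Explicit Euler with a fixed daily step (the traditional cheap implementation).
dt, n = 1.0, 30
s_euler = np.zeros(n + 1)
for i in range(n):
    s_euler[i + 1] = s_euler[i] + dt * rhs(0.0, s_euler[i])

# Adaptive-step, higher-order reference solution.
sol = solve_ivp(rhs, (0.0, n * dt), [0.0], rtol=1e-8, atol=1e-10,
                t_eval=np.arange(n + 1) * dt)
print("max abs error of fixed-step Euler:", np.abs(s_euler - sol.y[0]).max())
```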
Wimberley, Catriona J; Fischer, Kristina; Reilhac, Anthonin; Pichler, Bernd J; Gregoire, Marie Claude
2014-10-01
The partial saturation approach (PSA) is a simple, single-injection experimental protocol that estimates both B_avail and appK_D without the use of blood sampling. This makes it ideal for use in longitudinal studies of neurodegenerative diseases in the rodent. The aim of this study was to increase the range and applicability of the PSA by developing a data-driven strategy for determining reliable regional estimates of receptor density (B_avail) and in vivo affinity (1/appK_D), and to validate the strategy using a simulation model. The data-driven method uses a time window guided by the dynamic equilibrium state of the system, as opposed to a static time window. To test the method, simulations of partial saturation experiments were generated and validated against experimental data. The experimental conditions simulated included a range of receptor occupancy levels and three different B_avail and appK_D values to mimic disease states. The effect of using a reference region and typical PET noise on the stability and accuracy of the estimates was also investigated. The investigations showed that the parameter estimates in a simulated healthy mouse, using the data-driven method, were within 10-30% of the simulated input for the range of occupancy levels simulated. Throughout all simulated experimental conditions, the accuracy and robustness of the estimates using the data-driven method were much improved over the typical method of using a static time window, especially at low receptor occupancy levels. Introducing a reference region caused a bias of approximately 10% over the range of occupancy levels. Based on extensive simulated experimental conditions, it was shown that the data-driven method provides accurate and precise estimates of B_avail and appK_D for a broader range of conditions compared to the original method. Copyright © 2014 Elsevier Inc. All rights reserved.
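A hedged sketch of the estimation target: at equilibrium the bound concentration follows B = B_avail * F / (appK_D + F), so sampling bound/free pairs within the equilibrium time window allows both parameters to be fitted. The synthetic data below stand in for PET-derived concentrations; values and noise level are assumptions:

```python
# Fit B_avail and appK_D from synthetic bound/free pairs (illustrative).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
b_avail_true, app_kd_true = 20.0, 2.5          # e.g. nM, assumed
free = np.linspace(0.2, 10.0, 15)              # free ligand concentrations
bound = b_avail_true * free / (app_kd_true + free)
bound *= 1.0 + 0.05 * rng.standard_normal(free.size)   # measurement noise

def saturation(f, b_avail, app_kd):
    return b_avail * f / (app_kd + f)

(b_fit, kd_fit), _ = curve_fit(saturation, free, bound, p0=(10.0, 1.0))
print(f"B_avail ~ {b_fit:.1f}, appK_D ~ {kd_fit:.1f}")
```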
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes, and are therefore not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable-body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
NASA Technical Reports Server (NTRS)
Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)
2001-01-01
We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models built from high-resolution 3D data (>10,000 nodes), haptic real-time computations (>500 Hz) are not currently possible using traditional methods. Current research efforts are focused on the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real-time computations, we propose parallel processing of a Jacobi-preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGAs), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive-environment application, such as biomedical/surgical procedures or interactive scientific applications.
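A sketch of the proposed solver kernel: a Jacobi (diagonal) preconditioned conjugate gradient solve, which parallelizes well because the preconditioner is a pointwise operation. The sparse tridiagonal test matrix is a stand-in for the reduced stiffness system mentioned above:

```python
# Jacobi-preconditioned CG on a synthetic SPD system (illustrative).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 10000
# Symmetric positive definite tridiagonal stand-in for a stiffness matrix.
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)  # Jacobi preconditioner

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```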
Constant pressure and temperature discrete-time Langevin molecular dynamics
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Farago, Oded
2014-11-01
We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
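A minimal 1D implementation of the constant-temperature building block, the published Grønbech-Jensen/Farago discrete-time Langevin thermostat (the barostat extension introduced in this paper is not reproduced here). The harmonic potential and reduced units are illustrative; the configurational variance should approach kT/k for any stable time step:

```python
# GJF discrete-time Langevin thermostat on a 1D harmonic oscillator.
import numpy as np

rng = np.random.default_rng(8)
m, gamma, kT, dt = 1.0, 0.5, 1.0, 0.1   # mass, friction, temperature, step
k_spring = 1.0
a = (1 - gamma * dt / (2 * m)) / (1 + gamma * dt / (2 * m))
b = 1.0 / (1 + gamma * dt / (2 * m))

x, v = 0.0, 0.0
f = -k_spring * x
samples = []
for step in range(100000):
    beta = rng.normal(0.0, np.sqrt(2 * gamma * kT * dt))   # thermal noise
    x_new = x + b * dt * v + b * dt**2 / (2 * m) * f + b * dt / (2 * m) * beta
    f_new = -k_spring * x_new
    v = a * v + dt / (2 * m) * (a * f + f_new) + b / m * beta
    x, f = x_new, f_new
    samples.append(x)

samples = np.array(samples[10000:])                        # discard equilibration
print("configurational <x^2> ~", samples.var(), "(expect kT/k =", kT / k_spring, ")")
```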
Modeling the Lyα Forest in Collisionless Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Lukić, Zarija
2016-08-11
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line-of-sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%-80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
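A sketch of the PDF-matching step at the heart of a scheme like IMS: a rank-order (histogram) mapping that forces one field to take on the one-point PDF of a reference field while preserving its spatial ordering. The full method iterates this with a power-spectrum matching step; only the PDF step is shown here, on synthetic 1D fields:

```python
# Rank-order PDF matching of a pseudo-flux field to a reference (illustrative).
import numpy as np

rng = np.random.default_rng(9)
pseudo = rng.standard_normal(4096)              # e.g. from an N-body density field
reference = rng.lognormal(mean=0.0, sigma=0.5, size=4096)  # e.g. hydro flux values

order = np.argsort(pseudo)
matched = np.empty_like(pseudo)
matched[order] = np.sort(reference)             # impose the reference PDF rank-by-rank

# Spatial ordering of `pseudo` is preserved; only its one-point PDF changes.
print(np.allclose(np.sort(matched), np.sort(reference)))
```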
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischeler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.
Li, Wei; Zhang, Min; Wang, Mingyu; Han, Zhantao; Liu, Jiankai; Chen, Zhezhou; Liu, Bo; Yan, Yan; Liu, Zhu
2018-06-01
Brownfield site pollution and remediation is an urgent environmental issue worldwide. The screening and assessment of remedial alternatives is especially complex owing to multiple criteria involving technique, economy, and policy. To help decision-makers select remedial alternatives efficiently, the criteria framework developed by the U.S. EPA is improved, and a comprehensive method that integrates multiple criteria decision analysis (MCDA) with numerical simulation is presented in this paper. The criteria framework is modified and classified into three categories: qualitative, semi-quantitative, and quantitative criteria. The MCDA method AHP-PROMETHEE (analytical hierarchy process-preference ranking organization method for enrichment evaluation) is used to determine the priority ranking of the remedial alternatives, and a solute transport simulation is conducted to assess remedial efficiency. A case study is presented to demonstrate the screening method at a brownfield site in Cangzhou, northern China. The results show that the systematic method provides a reliable way to quantify the priority of the remedial alternatives.
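A sketch of the AHP half of AHP-PROMETHEE: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, with a consistency ratio check. The 3x3 comparison matrix below (technique vs. economy vs. policy on Saaty's 1-9 scale) is an invented example, not the paper's elicitation:

```python
# AHP criterion weights from a pairwise comparison matrix (illustrative).
import numpy as np

A = np.array([[1.0,   3.0, 5.0],
              [1/3.,  1.0, 2.0],
              [1/5., 1/2., 1.0]])    # assumed pairwise judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1) # consistency index
cr = ci / 0.58                       # Saaty's random index for n = 3 is 0.58
print("weights:", np.round(w, 3), "consistency ratio:", round(cr, 3))
```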
2013-01-01
The accelerated molecular dynamics (aMD) method has recently been shown to enhance the sampling of biomolecules in molecular dynamics (MD) simulations, often by several orders of magnitude. Here, we describe an implementation of the aMD method for the OpenMM application layer that takes full advantage of graphics processing unit (GPU) computing. The aMD method is shown to work in combination with the AMOEBA polarizable force field (AMOEBA-aMD), allowing the simulation of long-time-scale events with a polarizable force field. Benchmarks are provided to show that the AMOEBA-aMD method is efficiently implemented and produces accurate results in its standard parametrization. For the BPTI protein, we demonstrate that the protein structure described with AMOEBA remains stable even on the extended time scales accessed at high levels of acceleration. For the DNA repair metalloenzyme endonuclease IV, we show that the use of the AMOEBA force field is a significant improvement over fixed-charge models for describing the enzyme active site. The new AMOEBA-aMD method is publicly available (http://wiki.simtk.org/openmm/VirtualRepository) and promises to be interesting for studying complex systems that can benefit from both the use of a polarizable force field and enhanced sampling. PMID:24634618
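The standard aMD boost has a simple closed form: when the potential V falls below a threshold E, a boost dV = (E - V)^2 / (alpha + E - V) is added, flattening the landscape while leaving it untouched above E. The threshold and alpha values below are illustrative, not the paper's parametrization:

```python
# Standard aMD boost potential (illustrative parameter values).
import numpy as np

E_thresh, alpha = -1000.0, 50.0    # boost threshold and acceleration factor

def amd_boost(v):
    """Boost energy added to potential v (zero above the threshold)."""
    v = np.asarray(v, dtype=float)
    gap = np.maximum(E_thresh - v, 0.0)
    return gap**2 / (alpha + gap)

v = np.linspace(-1200.0, -900.0, 7)
print(np.round(amd_boost(v), 2))   # decays to 0 as v approaches E_thresh
```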
Petascale turbulence simulation using a highly parallel fast multipole method on GPUs
NASA Astrophysics Data System (ADS)
Yokota, Rio; Barba, L. A.; Narumi, Tetsu; Yasuoka, Kenji
2013-03-01
This paper reports large-scale direct numerical simulations of homogeneous-isotropic fluid turbulence, achieving sustained performance of 1.08 petaflop/s on GPU hardware using single precision. The simulations use a vortex particle method to solve the Navier-Stokes equations, with a highly parallel fast multipole method (FMM) as numerical engine, and match the current record in mesh size for this application, a cube of 4096³ computational points solved with a spectral method. The standard numerical approach used in this field is the pseudo-spectral method, relying on the FFT algorithm as the numerical engine. The particle-based simulations presented in this paper quantitatively match the kinetic energy spectrum obtained with a pseudo-spectral method, using a trusted code. In terms of parallel performance, weak scaling results show the FMM-based vortex method achieving 74% parallel efficiency on 4096 processes (one GPU per MPI process, 3 GPUs per node of the TSUBAME-2.0 system). The FFT-based spectral method is able to achieve just 14% parallel efficiency on the same number of MPI processes (using only CPU cores), due to the all-to-all communication pattern of the FFT algorithm. The calculation time for one time step was 108 s for the vortex method and 154 s for the spectral method, under these conditions. Computing with 69 billion particles, this work exceeds by an order of magnitude the largest vortex-method calculations to date.
Forecasting Lightning Threat using Cloud-resolving Model Simulations
NASA Technical Reports Server (NTRS)
McCaul, E. W., Jr.; Goodman, S. J.; LaCasse, K. M.; Cecil, D. J.
2009-01-01
As numerical forecasts capable of resolving individual convective clouds become more common, it is of interest to see if quantitative forecasts of lightning flash rate density are possible, based on fields computed by the numerical model. Previous observational research has shown robust relationships between observed lightning flash rates and inferred updraft and large precipitation ice fields in the mixed phase regions of storms, and that these relationships might allow simulated fields to serve as proxies for lightning flash rate density. It is shown in this paper that two simple proxy fields do indeed provide reasonable and cost-effective bases for creating time-evolving maps of predicted lightning flash rate density, judging from a series of diverse simulation case study events in North Alabama for which Lightning Mapping Array data provide ground truth. One method is based on the product of upward velocity and the mixing ratio of precipitating ice hydrometeors, modeled as graupel only, in the mixed phase region of storms at the -15 °C level, while the second method is based on the vertically integrated amounts of ice hydrometeors in each model grid column. Each method can be calibrated by comparing domainwide statistics of the peak values of simulated flash rate proxy fields against domainwide peak total lightning flash rate density data from observations. Tests show that the first method is able to capture much of the temporal variability of the lightning threat, while the second method does a better job of depicting the areal coverage of the threat. A blended solution is designed to retain most of the temporal sensitivity of the first method, while adding the improved spatial coverage of the second. Weather Research and Forecast Model simulations of selected North Alabama cases show that this model can distinguish the general character and intensity of most convective events, and that the proposed methods show promise as a means of generating quantitatively realistic fields of lightning threat. However, because models tend to have more difficulty in correctly predicting the instantaneous placement of storms, forecasts of the detailed location of the lightning threat based on single simulations can be in error. Although these model shortcomings presently limit the precision of lightning threat forecasts from individual runs of current generation models, the techniques proposed herein should continue to be applicable as newer and more accurate physically-based model versions, physical parameterizations, initialization techniques and ensembles of cloud-allowing forecasts become available.
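A sketch of the two proxy fields described above, computed on synthetic model columns: proxy 1 is the product of updraft speed and graupel mixing ratio at the -15 °C level; proxy 2 is the vertically integrated ice content of each column. The toy fields, the level index, and the calibration step are placeholder assumptions:

```python
# Two lightning-threat proxy fields on synthetic 3D model output.
import numpy as np

rng = np.random.default_rng(10)
ny, nx, nz = 50, 50, 40
w = np.abs(rng.standard_normal((nz, ny, nx))) * 5.0       # updraft speed (m/s)
q_ice = np.abs(rng.standard_normal((nz, ny, nx))) * 1e-3  # ice mixing ratio (kg/kg)
rho, dz = 0.5, 500.0                 # assumed mean air density (kg/m^3), layer depth (m)
k15 = 25                             # model level assumed to sit near -15 degC

proxy1 = w[k15] * q_ice[k15]                 # flash rate ~ w * q_graupel at -15 degC
proxy2 = (rho * q_ice * dz).sum(axis=0)      # vertically integrated ice path

# Each proxy would then be calibrated so its domainwide peak matches observed
# peak flash rate density; the blended product mixes the two calibrated fields.
print(proxy1.max(), proxy2.max())
```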
A Review of Numerical Simulation and Analytical Modeling for Medical Devices Safety in MRI
Kabil, J.; Belguerras, L.; Trattnig, S.; Pasquier, C.; Missoffe, A.
2016-01-01
Objectives: To review past and present challenges and ongoing trends in numerical simulation for MRI (Magnetic Resonance Imaging) safety evaluation of medical devices. Methods: A wide literature review on numerical and analytical simulation of simple or complex medical devices in MRI electromagnetic fields shows the evolution through time and a growing concern for MRI safety over the years. Major issues and achievements are described, as well as current trends and perspectives in this research field. Results: Numerical simulation of medical devices is constantly evolving, supported by now well-established calculation methods. Implants with simple geometry can often be simulated in a computational human model, but one issue remaining today is the experimental validation of these human models. A great concern is to assess RF heating in implants too complex to be traditionally simulated, like pacemaker leads. Thus, ongoing research focuses on alternative hybrid methods, both numerical and experimental, for example the transfer function method. For the static field and gradient fields, analytical models can be used for dimensioning simple implant shapes, but they are limited for complex geometries that cannot be studied with simplifying assumptions. Conclusions: Numerical simulation is an essential tool for MRI safety testing of medical devices. The main issues remain the accuracy of simulations compared to real life and the study of complex devices; but as the research field is constantly evolving, some promising ideas are now under investigation to take up these challenges. PMID:27830244
Magnetic fields end-face effect investigation of HTS bulk over PMG with 3D-modeling numerical method
NASA Astrophysics Data System (ADS)
Qin, Yujie; Lu, Yiyun
2015-09-01
In this paper, the magnetic field end-face effect of a high temperature superconducting (HTS) bulk over a permanent magnetic guideway (PMG) is investigated with a 3D-modeling numerical method. The electromagnetic behavior of the bulk is simulated using the finite element method (FEM). The framework is formulated by the magnetic field vector method (H-method). A superconducting levitation system composed of one rectangular HTS bulk and one infinitely long PMG is successfully investigated using the proposed method. The simulation results show that for a finite-geometry HTS bulk, even though the applied magnetic field is distributed only in the x-y plane, a magnetic field component Hz along the z-axis can be observed inside the HTS bulk.
Hybrid Method for Power Control Simulation of a Single Fluid Plasma Thruster
NASA Astrophysics Data System (ADS)
Jaisankar, S.; Sheshadri, T. S.
2018-05-01
Propulsive plasma flow through a cylindrical-conical diverging thruster is simulated by a power-controlled hybrid method to obtain the basic flow, thermodynamic, and electromagnetic variables. The simulation is based on a single-fluid model, with the electromagnetics described by the potential Poisson equation, Maxwell's equations, and Ohm's law, and the compressible fluid dynamics by the Navier-Stokes equations in cylindrical form. The proposed method solves the electromagnetics and fluid dynamics separately, both to segregate the two prominent scales for efficient computation and to deliver voltage-controlled rated power. The magnetic transport is solved for steady state, while the fluid dynamics is allowed to evolve in time along with an electromagnetic source, using schemes based on generalized finite-difference discretization. The multistep methodology with power control is employed to simulate fully ionized propulsive flow of argon plasma through the thruster. The numerical solution shows convergence of every part of the solver, including grid stability, so that the multistep hybrid method converges at the rated power delivery. Simulation results are in reasonable agreement with the reported physics of plasma flow in the thruster, indicating the potential utility of this hybrid computational framework, especially where the single-fluid approximation of plasma is relevant.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research in different areas including materials science, microelectronics, biology, and chemistry. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between the QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied, and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
Su, Peiran; Eri, Qitai; Wang, Qiang
2014-04-10
Optical roughness was introduced into the bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical-roughness BRDF model stems from the influence of surface roughness and wavelength on the ray reflectance calculation. This model was adopted to simulate real metal emissivity. The reverse Monte Carlo method was used to display the distribution of reflected rays. The numerical simulations showed that the optical-roughness BRDF model can capture the effect of wavelength on emissivity and simulate the variation of real metal emissivity with incidence angle.
Coarse-grained modeling of crystal growth and polymorphism of a model pharmaceutical molecule.
Mandal, Taraknath; Marson, Ryan L; Larson, Ronald G
2016-10-04
We describe a systematic coarse-graining method to study crystallization and predict possible polymorphs of small organic molecules. In this method, a coarse-grained (CG) force field is obtained by inverse-Boltzmann iteration from the radial distribution function of atomistic simulations of the known crystal. With the force field obtained by this method, we show that CG simulations of the drug phenytoin predict growth of a crystalline slab from a melt of phenytoin, allowing determination of the fastest-growing surface as well as giving the correct lattice parameters and crystal morphology. By applying metadynamics to the coarse-grained model, a new crystalline form of phenytoin (monoclinic, space group P2₁) was predicted, which differs from the experimentally known crystal structure (orthorhombic, space group Pna2₁). Atomistic simulations and quantum calculations then showed the polymorph to be metastable at ambient temperature and pressure, and thermodynamically more stable than the conventional orthorhombic crystal at high pressure. The results suggest an efficient route to study crystal growth of small organic molecules that could also be useful for identifying possible polymorphs.
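The inverse-Boltzmann iteration at the heart of this coarse-graining scheme has a compact closed form: the pair potential is corrected by kT times the log-ratio of the current and target radial distribution functions. A minimal sketch of the generic update (not the authors' production workflow); the temperature and damping factor are illustrative choices.

```python
import numpy as np

kB_T = 2.494  # kJ/mol at 300 K, for illustration

def ibi_update(V, g_cg, g_target, alpha=1.0, eps=1e-10):
    """One inverse-Boltzmann iteration on a tabulated pair potential.

    V        : current CG potential on a radial grid (kJ/mol)
    g_cg     : RDF measured from a CG simulation run with V
    g_target : RDF from the atomistic reference simulation
    alpha    : damping factor for stability (alpha < 1 slows the update)
    """
    return V + alpha * kB_T * np.log((g_cg + eps) / (g_target + eps))

# A common initial guess is the potential of mean force of the target RDF:
# V0 = -kB_T * np.log(g_target + eps)
```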
CO2 capture in amine solutions: modelling and simulations with non-empirical methods
NASA Astrophysics Data System (ADS)
Andreoni, Wanda; Pietrucci, Fabio
2016-12-01
Absorption in aqueous amine solutions is the most advanced technology for the capture of CO2, although it suffers from drawbacks that prevent exploitation on a large scale. The search for optimum solvents has been pursued with empirical methods and has also motivated a number of computational approaches over the last decade. However, a deeper level of understanding of the relevant chemical reactions in solution is required to contribute to this effort. We present here a brief critical overview of the most recent applications of computer simulations using ab initio methods. Comparison of their outcomes shows a strong dependence on the structural models employed to represent the molecular systems in solution and on the strategy used to simulate the reactions. In particular, the results of very recent ab initio molecular dynamics augmented with metadynamics are summarized, showing the crucial role of water, which has so far been strongly underestimated both in the calculations and in the interpretation of experimental data. Indications are given for advances in computational approaches that are necessary if they are to contribute to the rational design of new solvents.
NASA Astrophysics Data System (ADS)
Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Fuhrmann, D.; Gherghel-Lascu, A.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2016-04-01
The energy reconstruction at KASCADE-Grande is based on a combination of the shower size and the total muon number, which are both estimated for each individual air-shower event. We present investigations in which we employed a second method to reconstruct the primary energy using S(500), the charged particle density inferred with the KASCADE-Grande detector at a distance of 500 m from the shower axis. We accounted for the attenuation of inclined showers by applying the "Constant Intensity Cut" method, and we employed a simulation-derived calibration to convert the recorded S(500) into primary energy. We observed a systematic shift in the S(500)-derived energy compared with previously reported results obtained using the standard reconstruction technique. However, a comparison of the two methods based on simulated and measured data showed that this shift only appeared in the measured data. Our investigations showed that this shift was caused mainly by the inadequate description of the shape of the lateral density distribution in the simulations.
The many-body Wigner Monte Carlo method for time-dependent ab-initio quantum simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg; Dimov, I.
2014-09-15
The aim of ab-initio approaches is the simulation of many-body quantum systems from the first principles of quantum mechanics. These methods are traditionally based on the many-body Schrödinger equation, which represents an incredible mathematical challenge. In this paper, we introduce the many-body Wigner Monte Carlo method in the context of distinguishable particles and in the absence of spin-dependent effects. Despite these restrictions, the method has several advantages. First of all, the Wigner formalism is intuitive, as it is based on the concept of a quasi-distribution function. Secondly, the Monte Carlo numerical approach allows scalability on parallel machines that is practically unachievable by means of other techniques based on finite difference or finite element methods. Finally, this method allows time-dependent ab-initio simulations of strongly correlated quantum systems. In order to validate our many-body Wigner Monte Carlo method, as a case study we simulate a relatively simple system consisting of two particles in several different situations. We first start from two non-interacting free Gaussian wave packets. We then proceed with the inclusion of an external potential barrier, and we conclude by simulating two entangled (i.e. correlated) particles. The results show how, in the case of negligible spin-dependent effects, the many-body Wigner Monte Carlo method provides an efficient and reliable tool to study the time-dependent evolution of quantum systems composed of distinguishable particles.
Holmes, T J; Liu, Y H
1989-11-15
A maximum likelihood based iterative algorithm adapted from nuclear medicine imaging for noncoherent optical imaging was presented in a previous publication with some initial computer-simulation testing. This algorithm is identical in form to that previously derived in a different way by W. H. Richardson, "Bayesian-Based Iterative Method of Image Restoration," J. Opt. Soc. Am. 62, 55-59 (1972), and L. B. Lucy, "An Iterative Technique for the Rectification of Observed Distributions," Astron. J. 79, 745-765 (1974). Foreseen applications include superresolution and 3-D fluorescence microscopy. This paper presents further simulation testing of this algorithm and a preliminary experiment with a defocused camera. The simulations show quantified resolution improvement as a function of iteration number, and they show qualitatively the trend in limitations on restored resolution when noise is present in the data. Also shown are results of a simulation in restoring missing-cone information for 3-D imaging. Conclusions are in support of the feasibility of using these methods with real systems, while computational cost and timing estimates indicate that it should be realistic to implement these methods. It is suggested in the Appendix that future extensions to the maximum likelihood based derivation of this algorithm will address some of the limitations that are experienced with the nonextended form of the algorithm presented here.
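The Richardson-Lucy update referenced above multiplies the current estimate by the back-projected ratio of data to re-blurred estimate. A minimal 2D sketch using numpy/scipy, implementing the generic published iteration (not the extended variants foreseen in the appendix):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Multiplicative Richardson-Lucy update: estimate <- estimate *
    [ (data / (estimate (*) psf)) (*) psf_mirrored ], with (*) convolution."""
    image = image.astype(float)
    psf = psf / psf.sum()                  # normalized point spread function
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)    # data / re-blurred estimate
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```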
Feasibility of flare gas reformation to practical energy in Farashband gas refinery: no gas flaring.
Rahimpour, Mohammad Reaza; Jokar, Seyyed Mohammad
2012-03-30
A suggested method for controlling the level of hazardous materials in the atmosphere is the prevention of combustion in flares. In this work, three methods are proposed to recover flare gas instead of conventional gas burning in the flare at the Farashband gas refinery. These methods aim to minimize the environmental and economic disadvantages of burning flare gas. The proposed methods are: (1) gas-to-liquid (GTL) production, (2) electricity generation with a gas turbine, and (3) compression and injection into the refinery pipelines. To find the most suitable method, the refinery units that send gas to the flare, as well as the required equipment for the three aforementioned methods, are simulated. These simulations determine the amount of flare gas, the number of GTL barrels, the power generated by the gas turbine, and the required compression horsepower. The results of the simulation show that 563 barrels/day of valuable GTL products are produced by the first method. The second method provides 25 MW of electricity, and the third method provides compressed natural gas at 129 bar for injection into the refinery pipelines. In addition, the economics of the flare gas recovery methods are studied and compared. The results show that for the 4.176 MMSCFD of gas flared from the Farashband gas refinery, electricity production gives the highest rate of return (ROR), the lowest payback period, the highest annual profit, and mild capital investment. Therefore, electricity production is the superior method economically.
Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud
2017-01-01
In this paper, we present a real-time approach that allows tracking of deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this approach provides correct motion estimation in the presence of typical ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges.
Comparison of texture synthesis methods for content generation in ultrasound simulation for training
NASA Astrophysics Data System (ADS)
Mattausch, Oliver; Ren, Elizabeth; Bajka, Michael; Vanhoey, Kenneth; Goksel, Orcun
2017-03-01
Navigation and interpretation of ultrasound (US) images require substantial expertise, the training of which can be aided by virtual-reality simulators. However, a major challenge in creating plausible simulated US images is the generation of realistic ultrasound speckle. Since typical ultrasound speckle exhibits many properties of Markov Random Fields, it is conceivable to use texture synthesis for generating plausible US appearance. In this work, we investigate popular classes of texture synthesis methods for generating realistic US content. In a user study, we evaluate their performance for reproducing homogeneous tissue regions in B-mode US images from small image samples of similar tissue and report the best-performing synthesis methods. We further show that regression trees can be used on speckle texture features to learn a predictor for US realism.
Simulated Data for High Temperature Composite Design
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2006-01-01
The paper describes an effective formal method that can be used to simulate design properties for composites, inclusive of all the effects that influence those properties. The simulation method is implemented in integrated computer codes that include composite micromechanics, composite macromechanics, laminate theory, structural analysis, and a multi-factor interaction model. Demonstration of the method includes sample cases for static, thermal, and fracture reliability for a unidirectional metal matrix composite, as well as rupture strength and fatigue strength for a high temperature superalloy. Typical results obtained for a unidirectional composite show that the thermal properties are more sensitive to internal local damage, that the longitudinal properties degrade slowly with temperature, and that the transverse and shear properties degrade rapidly with temperature, as do rupture strength and fatigue strength for superalloys.
NASA Astrophysics Data System (ADS)
Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.
2017-11-01
We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we obtain the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal-mass, zero-spin, GW150914-like, and unequal-mass, precessing-spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand its systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.
JASMINE design and method of data reduction
NASA Astrophysics Data System (ADS)
Yamada, Yoshiyuki; Gouda, Naoteru; Yano, Taihei; Kobayashi, Yukiyasu; Niwa, Yoshito
2008-07-01
The Japan Astrometry Satellite Mission for Infrared Exploration (JASMINE) aims to construct a map of the Galactic bulge with 10 μas accuracy. We use a z-band CCD to avoid dust absorption, and observe an area of about 10 × 20 degrees around the Galactic bulge region. Because the stellar density is very high, the individual fields of view can be combined with high accuracy. With 5 years of observation, we will construct a map accurate to 10 μas. In this poster, I will show the observation strategy, the design of the JASMINE hardware, the reduction scheme, and the error budget. We have also constructed simulation software named the JASMINE Simulator, and we show the simulation results and the software design.
Teaching Business Simulation Games: Comparing Achievements Frontal Teaching vs. eLearning
NASA Astrophysics Data System (ADS)
Bregman, David; Keinan, Gila; Korman, Arik; Raanan, Yossi
This paper addresses the issue of comparing results achieved by students taught the same course in two drastically different ways: a regular, frontal method and an eLearning method. The subject taught required intensive communication among the students, thus making the eLearning students, a priori, less likely to do well in it. The research, comparing the achievements of students in a business simulation game over three semesters, shows that the use of the eLearning method did not result in any differences in performance, grades, or cooperation, thus strengthening the case for using eLearning in this type of course.
Optimization of global model composed of radial basis functions using the term-ranking approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun
2014-03-15
A term-ranking method is put forward to optimize a global model composed of radial basis functions, so as to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.
Iterative repair for scheduling and rescheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Deale, Michael
1991-01-01
An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill-climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown from applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
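At its core, simulated-annealing-based iterative repair is hill climbing that occasionally accepts cost-increasing repairs with a temperature-controlled probability. A generic skeleton follows; the cost and neighbour functions in the toy demo are invented placeholders standing in for the constraint framework, which the paper shows is what makes the search effective.

```python
import random, math

def anneal(schedule, cost, neighbour, t0=10.0, cooling=0.995, steps=20000):
    """Iterative-repair simulated annealing skeleton: repeatedly repair one
    conflict; accept cost-increasing repairs with probability exp(-delta/T)."""
    current, best = schedule, schedule
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t *= cooling                   # cooling: fewer uphill moves over time
    return best

# Toy demo: assign 8 tasks to 4 slots so no conflicting pair shares a slot.
conflicts = [(0, 1), (1, 2), (2, 3), (4, 5), (6, 7), (0, 7)]
toy_cost = lambda s: sum(s[a] == s[b] for a, b in conflicts)
def toy_neighbour(s):
    s = list(s)
    s[random.randrange(len(s))] = random.randrange(4)  # repair one assignment
    return s

best = anneal([0] * 8, toy_cost, toy_neighbour)
print(best, toy_cost(best))
```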
The least-action method, cold dark matter, and omega
NASA Technical Reports Server (NTRS)
Dunn, A. M.; Laflamme, R.
1995-01-01
Peebles has suggested an interesting technique, called the least-action method, to trace the positions of galaxies back in time. Applied to the Local Group galaxies, this method seems to indicate that we live in an omega approximately = 0.1 universe. We have studied a cold dark matter (CDM) N-body simulation with omega = 0.2 and H = 50 km/s/Mpc and compared trajectories traced back by the least-action method with those given by the centers of mass of the CDM halos. We show that the agreement between these sets of trajectories is at best qualitative. We also show that the line-of-sight peculiar velocities of halos are underestimated. This discrepancy is due to orphans, i.e., CDM particles which do not end up in halos. We vary the value of omega in the least-action method until the line-of-sight velocities agree with the CDM ones. The best value of this omega underestimates the omega of the CDM simulation by a factor of 4-5.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Yanmei; Li, Xinli; Bai, Yan
The measurement of multiphase flow parameters is of great importance in a wide range of industries. In multiphase measurement, the signals from the sensors are extremely weak and often buried in strong background noise. It is thus desirable to develop effective signal processing techniques that can detect the weak signal from the sensor outputs. In this paper, two methods, i.e., the lock-in amplifier (LIA) and an improved Duffing chaotic oscillator, are compared for detecting and processing the weak signal. For a sinusoidal signal buried in noise, correlation detection with a sinusoidal reference signal is simulated using the LIA. The improved Duffing chaotic oscillator method, which is based on the Wigner transformation, can restore the signal waveform and detect the frequency. The two methods are combined to detect and extract the weak signal. Simulation results show the effectiveness and accuracy of the proposed improved method. The comparative analysis shows that the improved Duffing chaotic oscillator method can strongly suppress noise, since it is sensitive to initial conditions.
Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels
NASA Astrophysics Data System (ADS)
Sturm, Ralf; Fischer, S.
2015-12-01
For certification, novel fuselage concepts have to demonstrate crashworthiness standards equivalent to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core through-thickness compression strength. For folded cores, the required change in core properties can be achieved by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash-relevant requirements. In this context, a foldcore micromodel simulation method is presented to identify the structural response of a twin-walled fuselage panel with folded core under crash-relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.
Stable lattice Boltzmann model for Maxwell equations in media
NASA Astrophysics Data System (ADS)
Hauser, A.; Verhey, J. L.
2017-12-01
The present work shows a method for stable simulations via the lattice Boltzmann (LB) model for electromagnetic (EM) waves transiting homogeneous media. LB models for such media have already been presented in the literature, but they suffer from numerical instability when the media transitions are sharp. We use one of these models in the limit of pure vacuum, derived from Liu and Yan [Appl. Math. Model. 38, 1710 (2014), 10.1016/j.apm.2013.09.009], and apply an extension that treats the effects of polarization and magnetization separately. We show simulations of simple examples in which EM waves travel into media, to quantify error scaling, stability, accuracy, and time scaling. For conductive media, we use Strang splitting and check the simulation's accuracy using the example of the skin effect. As for pure EM propagation, the error for the static limits, which are constructed with a current density added in a first-order scheme, can be less than 1%. The presented method is an easily implemented alternative for stabilizing simulations of EM waves propagating in media with spatially complex structure and arbitrary transitions.
NASA Technical Reports Server (NTRS)
Ostroff, Aaron J.
1998-01-01
This paper contains a study of two methods for use in a generic nonlinear simulation tool that could be used to determine achievable control dynamics and control power requirements while performing perfect tracking maneuvers over the entire flight envelope. The two methods are NDI (nonlinear dynamic inversion) and the SOFFT (Stochastic Optimal Feedforward and Feedback Technology) feedforward control structure. Equivalent discrete and continuous SOFFT feedforward controllers have been developed. These equivalent forms clearly show that the closed-loop plant model loop is a plant inversion and is the same as the NDI formulation. The main difference is that the NDI formulation has a closed-loop controller structure, whereas SOFFT uses an open-loop command model. Continuous, discrete, and hybrid controller structures have been developed and integrated into the formulation. Linear simulation results show that seven different configurations all give essentially the same response, with the NDI hybrid being slightly different. The SOFFT controller gave better tracking performance compared to the NDI controller when a nonlinear saturation element was added. Future plans include evaluation using a nonlinear simulation.
Semantic World Modelling and Data Management in a 4d Forest Simulation and Information System
NASA Astrophysics Data System (ADS)
Roßmann, J.; Hoppen, M.; Bücken, A.
2013-08-01
Various types of 3D simulation applications benefit from realistic forest models. They range from flight simulators for entertainment to harvester simulators for training and tree growth simulations for research and planning. Our 4D forest simulation and information system integrates the necessary methods for data extraction, modelling and management. Using modern methods of semantic world modelling, tree data can efficiently be extracted from remote sensing data. The derived forest models contain position, height, crown volume, type and diameter of each tree. This data is modelled using GML-based data models to assure compatibility and exchangeability. A flexible approach for database synchronization is used to manage the data and provide caching, persistence, a central communication hub for change distribution, and a versioning mechanism. Combining various simulation techniques and data versioning, the 4D forest simulation and information system can provide applications with "both directions" of the fourth dimension. Our paper outlines the current state, new developments, and integration of tree extraction, data modelling, and data management. It also shows several applications realized with the system.
Ross, Alastair J; Anderson, Janet E; Kodate, Naonori; Thomas, Libby; Thompson, Kellie; Thomas, Beth; Key, Suzie; Jensen, Heidi; Schiff, Rebekah; Jaye, Peter
2013-06-01
This paper describes the evaluation of a 2-day simulation training programme for staff designed to improve teamwork and inpatient care and compassion in an older persons' unit. The programme was designed to improve inpatient care for older people by using mixed modality simulation exercises to enhance teamwork and empathetic and compassionate care. Healthcare professionals took part in: (a) a 1-day human patient simulation course with six scenarios and (b) a 1-day ward-based simulation course involving five 1-h exercises with integrated debriefing. A mixed methods evaluation included observations of the programme, precourse and postcourse confidence rating scales and follow-up interviews with staff at 7-9 weeks post-training. Observations showed enjoyment of the course but some anxiety and apprehension about the simulation environment. Staff self-confidence improved after human patient simulation (t=9; df=56; p<0.001) and ward-based exercises (t=9.3; df=76; p<0.001). Thematic analysis of interview data showed learning in teamwork and patient care. Participants thought that simulation had been beneficial for team practices such as calling for help and verbalising concerns and for improved interaction with patients. Areas to address in future include widening participation across multi-disciplinary teams, enhancing post-training support and exploring further which aspects of the programme enhance compassion and care of older persons. The study demonstrated that simulation is an effective method for encouraging dignified care and compassion for older persons by teaching team skills and empathetic and sensitive communication with patients and relatives.
Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method
NASA Technical Reports Server (NTRS)
Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)
2002-01-01
A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and the canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work, these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofschen, S.; Wolff, I.
1996-08-01
Time-domain simulation results of two-dimensional (2-D) planar waveguide finite-difference time-domain (FDTD) analysis are normally analyzed using the Fourier transform. The introduced method of time-series analysis for extracting propagation and attenuation constants drastically reduces the required computation time. Additionally, a nonequidistant discretization together with an adequate excitation technique is used to reduce the number of spatial grid points. Therefore, it is possible to simulate normal- and superconducting planar waveguide structures with very thin conductors and small dimensions, as they are used in MMIC technology. The simulation results are compared with measurements and show good agreement.
NASA Astrophysics Data System (ADS)
Kamberaj, Hiqmet
2015-09-01
In this paper, we present a new method based on swarm particle social intelligence for use in replica exchange molecular dynamics simulations. In this method, the replicas (representing the different system configurations) are allowed communicating with each other through the individual and social knowledge, in additional to considering them as a collection of real particles interacting through the Newtonian forces. The new method is based on the modification of the equations of motion in such way that the replicas are driven towards the global energy minimum. The method was tested for the Lennard-Jones clusters of N = 4, 5, and 6 atoms. Our results showed that the new method is more efficient than the conventional replica exchange method under the same practical conditions. In particular, the new method performed better on optimizing the distribution of the replicas among the thermostats with time and, in addition, ergodic convergence is observed to be faster. We also introduce a weighted histogram analysis method allowing analyzing the data from simulations by combining data from all of the replicas and rigorously removing the inserted bias.
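The modified equations of motion couple each replica's Newtonian force with "individual" and "social" knowledge terms, as in particle swarm optimization, while conventional replica-exchange swaps between thermostats are retained. The toy sketch below illustrates the idea on a 1D double-well potential rather than the paper's Lennard-Jones clusters; the coupling constants and the Langevin thermostat are illustrative choices, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
energy = lambda x: (x**2 - 1.0)**2            # 1D double well
force = lambda x: -4.0 * x * (x**2 - 1.0)

n_rep = 8
temps = np.geomspace(0.05, 1.0, n_rep)        # one thermostat per replica
x = rng.uniform(-1.5, 1.5, n_rep)
v = np.zeros(n_rep)
p_best = x.copy()                             # individual best configurations
g_best = x[np.argmin(energy(x))]              # social (swarm) best

dt, c1, c2, gamma = 0.05, 0.5, 0.5, 1.0
for step in range(5000):
    # Newtonian force plus individual and social knowledge terms, with a
    # Langevin thermostat (an illustrative choice of thermostat).
    r1, r2 = rng.random(n_rep), rng.random(n_rep)
    v += dt * (force(x) + c1*r1*(p_best - x) + c2*r2*(g_best - x) - gamma*v) \
         + np.sqrt(2.0 * gamma * temps * dt) * rng.standard_normal(n_rep)
    x += dt * v
    better = energy(x) < energy(p_best)
    p_best[better] = x[better]
    g_best = p_best[np.argmin(energy(p_best))]
    # Conventional replica-exchange (Metropolis) swap between neighbours.
    i = rng.integers(n_rep - 1)
    d = (1/temps[i] - 1/temps[i+1]) * (energy(x[i]) - energy(x[i+1]))
    if rng.random() < np.exp(min(0.0, d)):
        x[[i, i+1]] = x[[i+1, i]]
        v[[i, i+1]] = v[[i+1, i]]

print(g_best, energy(g_best))   # should approach a minimum at x = +/-1
```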
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geisler-Moroder, David; Lee, Eleanor S.; Ward, Gregory J.
2016-08-29
The Five-Phase Method (5-pm) for simulating complex fenestration systems with Radiance is validated against field measurements. The capability of the method to predict workplane illuminances, vertical sensor illuminances, and glare indices derived from captured and rendered high dynamic range (HDR) images is investigated. To be able to accurately represent the direct sun part of the daylight not only in sensor point simulations, but also in renderings of interior scenes, the 5-pm calculation procedure was extended. The validation shows that the 5-pm is superior to the Three-Phase Method for predicting horizontal and vertical illuminance sensor values as well as glare indices derived from rendered images. Even with input data from global and diffuse horizontal irradiance measurements only, daylight glare probability (DGP) values can be predicted within 10% error of measured values for most situations.
High effective inverse dynamics modelling for dual-arm robot
NASA Astrophysics Data System (ADS)
Shen, Haoyu; Liu, Yanli; Wu, Hongtao
2018-05-01
To deal with the problem of inverse dynamics modelling for a dual-arm robot, a recursive inverse dynamics modelling method based on the decoupled natural orthogonal complement is presented. In this model, the concepts and methods of decoupled natural orthogonal complement matrices are used to eliminate the constraint forces in the Newton-Euler kinematic equations, and screws are used to express the kinematic and dynamic variables. On this basis, a special simulation program was developed with the symbolic software Mathematica and used for a simulation study of a dual-arm robot. Simulation results show that the proposed method based on the decoupled natural orthogonal complement saves an enormous amount of CPU time compared with the recursive Newton-Euler kinematic equations, and that the results are correct and reasonable, verifying the reliability and efficiency of the method.
Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán
2018-04-05
Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects, like Ring Polymer Molecular Dynamics (RPMD), are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field replicating the quantum properties of the original force field. In this work, we propose an efficient method of computing the FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
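The general idea can be illustrated with the lowest-order Wigner-Kirkwood (quadratic Feynman-Hibbs-type) correction to a radial pair potential, which yields a temperature-dependent effective classical potential. The sketch below applies it to a Lennard-Jones model of neon; the LJ parameters are commonly quoted literature values, and this is a generic illustration, not the authors' FFF construction.

```python
import numpy as np

hbar = 1.054571817e-34           # J s
kB   = 1.380649e-23              # J / K
m    = 20.18 * 1.66054e-27       # neon atomic mass, kg
mu   = m / 2.0                   # reduced mass of a neon pair
eps  = 36.8 * kB                 # LJ well depth for Ne (literature value)
sig  = 2.79e-10                  # LJ diameter for Ne, m
T    = 30.0                      # K
beta = 1.0 / (kB * T)

def V(r):
    s6 = (sig / r)**6
    return 4.0 * eps * (s6**2 - s6)

def dV(r):
    return 4.0 * eps * (-12.0 * sig**12 / r**13 + 6.0 * sig**6 / r**7)

def d2V(r):
    return 4.0 * eps * (156.0 * sig**12 / r**14 - 42.0 * sig**6 / r**8)

r = np.linspace(0.9 * sig, 3.0 * sig, 400)
# First-order Wigner-Kirkwood correction for a radial pair potential:
V_eff = V(r) + (beta * hbar**2 / (24.0 * mu)) * (d2V(r) + 2.0 * dV(r) / r)
# V_eff is shallower than V at low T, mimicking zero-point delocalization.
```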
Simulation of Triple Oxidation Ditch Wastewater Treatment Process
NASA Astrophysics Data System (ADS)
Yang, Yue; Zhang, Jinsong; Liu, Lixiang; Hu, Yongfeng; Xu, Ziming
2010-11-01
This paper presents the modeling mechanism and method for a sewage treatment system. A triple oxidation ditch process at a WWTP was simulated based on the activated sludge model ASM2D with the GPS-X software. In order to identify the adequate model structure to be implemented in the GPS-X environment, the oxidation ditch was divided into several completely stirred tank reactors depending on the distribution of aeration devices and dissolved oxygen concentration. The removal efficiencies of COD, ammonia nitrogen, total nitrogen, total phosphorus, and SS were simulated with the GPS-X software using influent quality data for this WWTP from June to August 2009, to investigate the differences between the simulated results and the actual results. The results showed that the simulated values reflect the actual condition of the triple oxidation ditch process well.
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
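Genetic-algorithm optimization of filtration and relaxation times can be sketched compactly. The objective below is a made-up stand-in for the calibrated GPS-X membrane model (which is far richer), so the numbers and bounds are purely illustrative; only the GA machinery (selection, crossover, mutation) reflects the method described.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost(x):
    """Hypothetical stand-in for the calibrated membrane model: x =
    (filtration minutes, relaxation minutes); penalise energy and fouling."""
    filt, relax = x
    energy = 0.8 * filt + 0.2 * relax
    fouling = 50.0 / (relax + 1.0) + 0.05 * filt**1.5
    return energy + fouling

lo, hi = np.array([1.0, 0.5]), np.array([15.0, 5.0])
pop = rng.uniform(lo, hi, size=(40, 2))              # initial population
for gen in range(100):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]          # truncation selection
    idx = rng.integers(20, size=(20, 2))             # random parent per gene
    children = parents[idx, [0, 1]]                  # uniform crossover
    children += rng.normal(0.0, 0.2, children.shape) # Gaussian mutation
    children = np.clip(children, lo, hi)
    pop = np.vstack([parents, children])

print(pop[np.argmin([cost(ind) for ind in pop])])    # best (filt, relax) found
```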
Genetic Adaptive Control for PZT Actuators
NASA Technical Reports Server (NTRS)
Kim, Jeongwook; Stover, Shelley K.; Madisetti, Vijay K.
1995-01-01
A piezoelectric transducer (PZT) is capable of providing linear motion if controlled correctly and could provide a replacement for traditional heavy and large servo systems using motors. This paper focuses on a genetic model reference adaptive control technique (GMRAC) for a PZT moving a mirror, where the goal is to keep the mirror velocity constant. Genetic Algorithms (GAs) are an integral part of the GMRAC technique, acting as the search engine for an optimal PID controller. Two methods are suggested to control the actuator in this research: the first is to change the PID parameters, and the other is to add an additional reference input to the system. The simulation results of these two methods are compared. Simulated Annealing (SA) is also used to solve the problem, and the results of the GA and SA approaches are compared; the GAs show the best results. The entire model is designed using MathWorks' Simulink tool.
Liu, Bo; Zhang, Lifu; Zhang, Xia; Zhang, Bing; Tong, Qingxi
2009-01-01
Data simulation is widely used in remote sensing to produce imagery for a new sensor in the design stage, for scale issues of some special applications, or for testing of novel algorithms. Hyperspectral data can provide more abundant information than traditional multispectral data and thus greatly extend the range of remote sensing applications. Unfortunately, hyperspectral data are much more difficult and expensive to acquire and were not available prior to the development of operational hyperspectral instruments, while large amounts of accumulated multispectral data have been collected around the world over the past several decades. Therefore, it is reasonable to examine means of using these multispectral data to simulate or construct hyperspectral data, especially in situations where hyperspectral data are necessary but hard to acquire. Here, a method based on spectral reconstruction is proposed to simulate hyperspectral data (Hyperion data) from multispectral Advanced Land Imager data (ALI data). This method involves extraction of the inherent information of source data and reassignment to newly simulated data. A total of 106 bands of Hyperion data were simulated from ALI data covering the same area. To evaluate this method, we compare the simulated and original Hyperion data by visual interpretation, statistical comparison, and classification. The results generally showed good performance of this method and indicated that most bands were well simulated, with the information both preserved and presented well. This makes it possible to simulate hyperspectral data from multispectral data for testing the performance of algorithms, extend the use of multispectral data, and aid the design of a virtual sensor. PMID:22574064
NASA Astrophysics Data System (ADS)
Edwards, T.
2015-12-01
Modelling Antarctic marine ice sheet instability (MISI) - the potential for sustained grounding line retreat along downsloping bedrock - is very challenging because high resolution at the grounding line is required for reliable simulation. Assessing modelling uncertainties is even more difficult, because such models are very computationally expensive, restricting the number of simulations that can be performed. Quantifying uncertainty in future Antarctic instability has therefore so far been limited. There are several ways to tackle this problem, including: (1) simulating a small domain, to reduce expense and allow the use of ensemble methods; (2) parameterising the response of the grounding line to the onset of MISI, for the same reasons; (3) emulating the simulator with a statistical model, to explore the impacts of uncertainties more thoroughly; (4) substituting physical models with expert-elicited statistical distributions. Methods 2-4 require rigorous testing against observations and high resolution models for confidence in their results. We use all four to examine the dependence of MISI in the Amundsen Sea Embayment (ASE) on uncertain model inputs, including bedrock topography, ice viscosity, basal friction, model structure (sliding law and treatment of grounding line migration) and MISI triggers (including basal melting and risk of ice shelf collapse). We compare simulations from a 3000-member ensemble with GRISLI (methods 2, 4) with a 284-member ensemble from BISICLES (method 1) and also use emulation (method 3). Results from the two ensembles show similarities, despite very different model structures and ensemble designs. Basal friction and topography have a large effect on the extent of grounding line retreat, and the sliding law strongly modifies sea level contributions through changes in the rate and extent of grounding line retreat and the rate of ice thinning. Over 50 years, MISI in the ASE gives up to 1.1 mm/year (95% quantile) SLE in GRISLI (calibrated with ASE mass losses in a Bayesian framework), and up to 1.2 mm/year SLE (95% quantile) in the 270 completed BISICLES simulations (no calibration). We will show preliminary results emulating the models, calibrating with observations, and comparing them to assess structural uncertainty. We use these to improve MISI projections for the whole continent.
Puszka, Agathe; Hervé, Lionel; Planat-Chrétien, Anne; Koenig, Anne; Derouard, Jacques; Dinten, Jean-Marc
2013-01-01
We show how to apply the Mellin-Laplace transform to process time-resolved reflectance measurements for diffuse optical tomography. We illustrate this method on simulated signals incorporating the main sources of experimental noise and suggest how to fine-tune the method in order to detect the deepest absorbing inclusions and optimize their localization in depth, depending on the dynamic range of the measurement. Finally, we apply this method to measurements acquired with a setup including a femtosecond laser, photomultipliers, and a time-correlated single photon counting board. Simulations and experiments are illustrated for a probe with an interfiber distance of 1.5 cm and show the potential of time-resolved techniques for imaging absorption contrast in depth with this geometry. PMID:23577292
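For a sampled time-resolved signal, the Mellin-Laplace transform reduces to a family of weighted moments. A minimal numerical sketch follows; the normalisation p^(n+1)/n! is one common convention in the time-resolved diffuse optics literature (conventions vary), and the input curve is a toy stand-in for a measured signal.

```python
import numpy as np
from math import factorial

def trapz(y, x):
    """Trapezoidal quadrature (written out to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mellin_laplace(signal, t, p, n_max):
    """Moments M_n = p**(n+1)/n! * integral s(t) t**n exp(-p t) dt for
    n = 0..n_max; higher n emphasises later (deeper-travelling) photons."""
    return np.array([p**(n + 1) / factorial(n)
                     * trapz(signal * t**n * np.exp(-p * t), t)
                     for n in range(n_max + 1)])

# Toy curve standing in for a measured time-resolved reflectance signal.
t = np.linspace(0.0, 5e-9, 1024)                 # seconds
s = (t / 1e-9)**2 * np.exp(-t / 0.5e-9)
moments = mellin_laplace(s, t, p=3e9, n_max=5)
```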
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparisons of the behavior of different models under a consistent implementation of the WTG method and the DGW method, and systematic comparisons of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce differently signed circulations. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Molecular dynamics and Monte Carlo simulations for protein-ligand binding and inhibitor design.
Cole, Daniel J; Tirado-Rives, Julian; Jorgensen, William L
2015-05-01
Non-nucleoside inhibitors of HIV reverse transcriptase are an important component of treatment against HIV infection. Novel inhibitors are sought that increase potency against variants that contain the Tyr181Cys mutation. Molecular dynamics based free energy perturbation simulations have been run to study factors that contribute to protein-ligand binding, and the results are compared with those from previous Monte Carlo based simulations and activity data. Predictions of protein-ligand binding modes are very consistent for the two simulation methods; the accord is attributed to the use of an enhanced sampling protocol. The Tyr181Cys binding pocket supports large, hydrophobic substituents, which is in good agreement with experiment. Although some discrepancies exist between the results of the two simulation methods and experiment, free energy perturbation simulations can be used to rapidly test small molecules for gains in binding affinity. Free energy perturbation methods show promise in providing fast, reliable and accurate data that can be used to complement experiment in lead optimization projects.
Simulation methods with extended stability for stiff biochemical kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
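The Poisson tau-leap step that the RK extension builds on is short: over a fixed step tau, each reaction channel fires a Poisson-distributed number of times with mean (propensity × tau). A minimal sketch on a toy two-state network follows; the network and rate constants are invented for illustration, and the paper's RK coefficients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy reversible isomerisation A <-> B (invented for illustration):
#   R1: A -> B with propensity c1*A ;  R2: B -> A with propensity c2*B
c = np.array([1.0, 0.5])
state = np.array([1000, 0])
stoich = np.array([[-1, +1],     # effect of R1 on (A, B)
                   [+1, -1]])    # effect of R2 on (A, B)

propensities = lambda s: np.array([c[0] * s[0], c[1] * s[1]])

t, t_end, tau = 0.0, 5.0, 0.01   # fixed leap size, for simplicity
while t < t_end:
    a = propensities(state)
    k = rng.poisson(a * tau)                   # firings per channel in the leap
    state = np.maximum(state + k @ stoich, 0)  # guard against negatives
    t += tau

print(state)   # near the analytic equilibrium, where c1*A = c2*B
```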
Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.
Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M
2014-12-01
In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
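The ranking underlying a curve boxplot can be illustrated with the classical band depth of order J=2: a curve is deep if it lies inside the envelope of many pairs of ensemble members. This sketch computes that simple variant, not the paper's full generalization; the demo ensemble is synthetic.

```python
import numpy as np
from itertools import combinations

def band_depth(curves):
    """Band depth of order J=2: a curve's depth is the fraction of curve
    pairs whose pointwise envelope contains it entirely. Sorting by depth
    gives the median (deepest) curve and the band for a curve boxplot."""
    curves = np.asarray(curves)
    n = len(curves)
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):
        lo = np.minimum(curves[i], curves[j])
        hi = np.maximum(curves[i], curves[j])
        depth += np.all((curves >= lo) & (curves <= hi), axis=1)
    return depth / (n * (n - 1) / 2)

# Demo: 30 vertically shifted sine curves; the deepest plays the median role.
x = np.linspace(0, 2 * np.pi, 100)
ensemble = [np.sin(x) + 0.3 * np.random.randn() for _ in range(30)]
print(np.argmax(band_depth(ensemble)))
```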
Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain
NASA Astrophysics Data System (ADS)
Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.
2018-04-01
High resolution satellites with longer focal lengths and larger apertures have been widely used in recent years for georeferencing observed scenes. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform including attitude and position information, the time system and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is rigorously validated by simulating geolocation accuracy according to the test method for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to estimating geolocation accuracy before satellite launch.
Thermodynamics of reformulated automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-06-01
Two methods for predicting Reid vapor pressure (Rvp) and initial vapor emissions of reformulated gasoline blends that contain one or more oxygenated compounds show excellent agreement with experimental data. In the first method, method A, D-86 distillation data for gasoline blends are used for predicting Rvp from a simulation of the mini dry vapor pressure equivalent (Dvpe) experiment. The other method, method B, relies on analytical information (PIANO analyses) of the base gasoline and uses classical thermodynamics for simulating the same Rvp equivalent (Rvpe) mini experiment. Method B also predicts composition and other properties for the fuel's initial vapor emission. Method B, although complex, is more useful in that it can predict properties of blends without a D-86 distillation. An important aspect of method B is its capability to predict the composition of initial vapor emissions from gasoline blends. Thus, it offers a powerful tool to planners of gasoline blending. Method B uses theoretically sound formulas and rigorous thermodynamic routines, and relies on data and correlations of physical properties that are in the public domain. Results indicate that predictions made with both methods agree very well with experimental values of Dvpe. Computer simulation methods were programmed and tested.
Label-free, single-object sensing with a microring resonator: FDTD simulation.
Nguyen, Dan T; Norwood, Robert A
2013-01-14
Label-free, single-object sensing with a microring resonator is investigated numerically using the finite difference time-domain (FDTD) method. A pulse with ultra-wide bandwidth that spans several resonant modes of the ring and of the sensing object is used, enabling a single-shot simulation of the microring sensing. The FDTD simulation can describe not only the circulation of the light in a whispering-gallery-mode (WGM) microring and the multiple interactions between the light and the sensing object, but also other important factors of the sensing system, such as scattering and radiation losses. The FDTD results show that the simulation can yield the resonant shift of the WGM cavity modes. Furthermore, it can also extract eigenmodes of the sensing object, and therefore information from deep inside the object. The simulation method is not only suitable for a single object (single molecule, nano- or micro-scale particle) but can be extended to the problem of multiple objects as well.
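To make the time-stepping machinery concrete, here is a bare-bones 1D FDTD (Yee leapfrog) update loop in Python with a wideband Gaussian source; the paper's simulations are 2D with a ring geometry and a sensing object, so this is only a minimal sketch of the same idea, with the grid size, source position and pulse profile chosen arbitrarily.

```python
import numpy as np

def fdtd_1d(n_cells, n_steps, src_pos, pulse, courant=0.5):
    """Minimal 1D FDTD: normalized E/H leapfrog update on a Yee grid."""
    ez = np.zeros(n_cells)
    hy = np.zeros(n_cells)
    history = []
    for t in range(n_steps):
        hy[:-1] += courant * (ez[1:] - ez[:-1])   # update H from curl of E
        ez[1:]  += courant * (hy[1:] - hy[:-1])   # update E from curl of H
        ez[src_pos] += pulse(t)                   # soft source injection
        history.append(ez.copy())
    return np.array(history)

# Ultra-wideband excitation, in the spirit of the single-shot idea above:
gaussian = lambda t, t0=40.0, w=12.0: np.exp(-((t - t0) / w) ** 2)
fields = fdtd_1d(n_cells=400, n_steps=1000, src_pos=100, pulse=gaussian)
```

Recording the field at a probe point and Fourier-transforming it then reveals the resonant peaks in one run, which is the essence of the single-shot approach described in the abstract.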
Multi-scale simulations of space problems with iPIC3D
NASA Astrophysics Data System (ADS)
Lapenta, Giovanni; Bettarini, Lapo; Markidis, Stefano
The implicit Particle-in-Cell method for the computer simulation of space plasma, and its implementation in a three-dimensional parallel code, called iPIC3D, are presented. The implicit integration in time of the Vlasov-Maxwell system removes the numerical stability constraints and enables kinetic plasma simulations at magnetohydrodynamics scales. Simulations of magnetic reconnection in plasma are presented to show the effectiveness of the algorithm. In particular we will show a number of simulations done for large-scale 3D systems using the physical mass ratio for Hydrogen. Most notably, one simulation treats kinetically a box of tens of Earth radii in each direction and was conducted using about 16000 processors of the Pleiades NASA computer. The work is conducted in collaboration with the MMS-IDS theory team from the University of Colorado (M. Goldman, D. Newman and L. Andersson). Reference: S. Markidis, G. Lapenta, Rizwan-uddin, "Multi-scale simulations of plasma with iPIC3D", Mathematics and Computers in Simulation, available online 17 October 2009, http://dx.doi.org/10.1016/j.matcom.2009.08.038
Carreón, Gustavo; Gershenson, Carlos; Pineda, Luis A
2017-01-01
The equal headway instability, that is, the tendency of a configuration with regular time intervals between vehicles to be volatile, is a common regulation problem in public transportation systems. An unsatisfactory regulation results in low efficiency and possible collapses of the service. Computational simulations have shown that self-organizing methods can regulate the headway adaptively beyond the theoretical optimum. In this work, we develop a computer simulation for metro systems fed with real data from the Mexico City Metro to test the current regulatory method against a novel self-organizing approach. The current model considers overall system data such as minimum and maximum waiting times at stations, while the self-organizing method regulates the headway in a decentralized manner using local information such as passenger inflow and the positions of neighboring trains. The simulation shows that the self-organizing method improves performance over the current one, as it adapts to environmental changes at the timescale at which they occur. The correlation between the simulation of the current model and empirical observations carried out in the Mexico City Metro provides a basis for calculating the expected performance of the self-organizing method should it be implemented in the real system. We also performed a pilot study at the Balderas station to regulate the alighting and boarding of passengers through guide signs on platforms. The analysis of empirical data shows a reduction in the waiting time of trains at stations. Finally, we provide recommendations to improve public transportation systems.
A non-hydrostatic flat-bottom ocean model entirely based on Fourier expansion
NASA Astrophysics Data System (ADS)
Wirth, A.
2005-01-01
We show how to implement free-slip and no-slip boundary conditions in a three-dimensional Boussinesq flat-bottom ocean model based on Fourier expansion. Our method is inspired by the immersed or virtual boundary technique, in which the effect of boundaries on the flow field is modeled by a virtual force field. Our method, however, explicitly depletes the velocity on the boundary induced by the pressure, while at the same time respecting the incompressibility of the flow field. Spurious spatial oscillations remain at a negligible level in the simulated flow field when using our technique, and no filtering of the flow field is necessary. We furthermore show that by using the method presented here the residual velocities at the boundaries are easily reduced to a negligible value. This stands in contradistinction to previous calculations using the immersed or virtual boundary technique. The efficiency is demonstrated by simulating a Rayleigh impulsive flow, for which the time evolution of the simulated flow is compared to an analytic solution, and a three-dimensional Boussinesq simulation of ocean convection. The second instance is taken from a well-studied oceanographic context: a free-slip boundary condition is applied on the upper surface, the modeled sea surface, and a no-slip boundary condition on the lower boundary, the modeled ocean floor. Convergence properties of the method are investigated by solving a two-dimensional stationary problem at different spatial resolutions. The work presented here is restricted to a flat ocean floor. Extensions of our method to ocean models with realistic topography are discussed.
NASA Astrophysics Data System (ADS)
Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.
2005-11-01
Simulation is useful in the validation of functional image analysis methods, particularly when considering the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, making it difficult to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF), and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF, and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel matched to the system resolution rather than the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the size detected using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for use in the comparison and cross validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
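A minimal sketch of the abnormality-insertion step described above might look as follows in Python; the Gaussian approximation to the measured PSF and the multiplicative form of the perfusion deficit are assumptions made here for illustration, since the abstract does not specify either.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def insert_abnormality(normal_img, deficit_map, fwhm_mm, voxel_mm):
    """Remove a simulated perfusion deficit from a normal SPECT image.

    deficit_map : assumed fractional deficit in [0, 1], already transformed
                  from stereotactic space to this subject's space.
    fwhm_mm     : system resolution; the measured PSF is approximated
                  here by an isotropic Gaussian.
    """
    sigma_vox = fwhm_mm / (2.355 * voxel_mm)      # FWHM -> Gaussian sigma
    smoothed = gaussian_filter(deficit_map, sigma_vox)
    return normal_img * (1.0 - smoothed)
```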
Simulation methods to estimate design power: an overview for applied research
2011-01-01
Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
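The article supplies code in R and Stata; as a language-neutral illustration of the same idea, the Python sketch below estimates power for a simple two-arm individually randomized design by simulation, with the effect size and outcome model chosen purely for illustration.

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm, effect, sd=1.0, n_sims=2000, alpha=0.05, seed=1):
    """Monte Carlo power: simulate the trial many times under the assumed
    effect, analyze each replicate, and report the rejection fraction."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        if stats.ttest_ind(treated, control).pvalue < alpha:
            rejections += 1
    return rejections / n_sims

# e.g. power to detect a 0.3 SD improvement in a growth outcome
print(simulated_power(n_per_arm=200, effect=0.3))
```

Extending the approach to cluster-randomized designs amounts to adding a cluster-level random effect to the data-generating step and switching the analysis to a cluster-appropriate model, which is exactly the flexibility the authors emphasize.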
Progress Towards a Cartesian Cut-Cell Method for Viscous Compressible Flow
NASA Technical Reports Server (NTRS)
Berger, Marsha; Aftosmis, Michael J.
2011-01-01
The proposed paper reports advances in developing a method for high Reynolds number compressible viscous flow simulations using a Cartesian cut-cell method with embedded boundaries. This preliminary work focuses on the accuracy of the discretization near solid wall boundaries. A model problem is used to investigate the accuracy of various difference stencils for second derivatives and to guide development of the discretization of the viscous terms in the Navier-Stokes equations. Near walls, quadratic reconstruction in the wall-normal direction is used to mitigate mesh irregularity and yields smooth skin friction distributions along the body. Multigrid performance is demonstrated using second-order coarse grid operators combined with second-order restriction and prolongation operators. Preliminary verification and validation of the method are demonstrated using flat-plate and airfoil examples at compressible Mach numbers. Simulations of flow over laminar and turbulent flat plates show skin friction and velocity profiles compared with those from boundary-layer theory. Airfoil simulations are performed at laminar and turbulent Reynolds numbers, with results compared to both other simulations and experimental data.
A Simulation Study of Methods for Selecting Subgroup-Specific Doses in Phase I Trials
Morita, Satoshi; Thall, Peter F.; Takeda, Kentaro
2016-01-01
Patient heterogeneity may complicate dose-finding in phase I clinical trials if the dose-toxicity curves differ between subgroups. Conducting separate trials within subgroups may lead to infeasibly small sample sizes in subgroups having low prevalence. Alternatively, it is not obvious how to conduct a single trial while accounting for heterogeneity. To address this problem, we consider a generalization of the continual reassessment method (O'Quigley et al., 1990) based on a hierarchical Bayesian dose-toxicity model that borrows strength between subgroups under the assumption that the subgroups are exchangeable. We evaluate a design using this model that includes subgroup-specific dose selection and safety rules. A simulation study is presented that includes comparison of this method to three alternative approaches, based on non-hierarchical models, that make different types of assumptions about within-subgroup dose-toxicity curves. The simulations show that the hierarchical model-based method is recommended in settings where the dose-toxicity curves are exchangeable between subgroups. We present practical guidelines for application, and provide computer programs for trial simulation and conduct. PMID:28111916
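For orientation, the sketch below implements the basic one-parameter continual reassessment method with a grid-approximated posterior in Python; the hierarchical, subgroup-specific model the authors evaluate builds on this core, and the skeleton, prior, and target toxicity rate used here are illustrative assumptions.

```python
import numpy as np

def crm_next_dose(skeleton, n_tox, n_pat, target=0.25):
    """One-parameter CRM: p_tox(d) = skeleton[d] ** exp(a), a ~ N(0, 1).

    skeleton : prior guesses of the toxicity probability at each dose
    n_tox    : observed toxicities per dose
    n_pat    : patients treated per dose
    """
    a = np.linspace(-3.0, 3.0, 601)          # grid over the model parameter
    log_post = -0.5 * a**2                   # N(0, 1) log-prior
    for d, p0 in enumerate(skeleton):
        p = p0 ** np.exp(a)
        log_post += n_tox[d] * np.log(p) + (n_pat[d] - n_tox[d]) * np.log1p(-p)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    p_hat = np.array([np.sum(post * p0 ** np.exp(a)) for p0 in skeleton])
    return int(np.argmin(np.abs(p_hat - target)))  # dose closest to target

# Illustrative call: 3 dose levels, some data at the two lowest doses
print(crm_next_dose([0.10, 0.25, 0.40], n_tox=[0, 1, 0], n_pat=[3, 3, 0]))
```

The safety rules and between-subgroup borrowing described in the abstract would sit on top of this posterior update.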
Variable speed limit strategies analysis with link transmission model on urban expressway
NASA Astrophysics Data System (ADS)
Li, Shubin; Cao, Danni
2018-02-01
The variable speed limit (VSL) is a kind of active traffic management method. Most VSL strategies are used for expressway traffic flow control in order to ensure traffic safety. The urban expressway system, however, is the main artery of a city, carrying most of the traffic pressure, and it has traffic characteristics similar to intercity expressways. In this paper, an improved link transmission model (LTM) combined with VSL strategies is proposed for the urban expressway network. The model can simulate the movement of vehicles and of shock waves, and balances computational cost against accuracy. Furthermore, the optimal VSL strategy can be determined based on the simulation method, providing management strategies for traffic managers. Finally, a simple example is given to illustrate the model and method. The indexes considered in the simulation are the average density, the average speed and the average flow on the traffic network. The simulation results show that the proposed model and method are feasible, and that the VSL strategy can effectively alleviate traffic congestion in some cases and greatly improve the efficiency of the transportation system.
Full Core TREAT Kinetics Demonstration Using Rattlesnake/BISON Coupling Within MAMMOTH
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortensi, Javier; DeHart, Mark D.; Gleicher, Frederick N.
2015-08-01
This report summarizes key aspects of research in evaluation of modeling needs for TREAT transient simulation. Using a measured TREAT critical measurement and a transient for a small, experimentally simplified core, Rattlesnake and MAMMOTH simulations are performed building from simple infinite media to a full core model. Cross section processing methods are evaluated, various homogenization approaches are assessed and the neutronic behavior of the core studied to determine key modeling aspects. The simulation of the minimum critical core with the diffusion solver shows very good agreement with the reference Monte Carlo simulation and the experiment. The full core transient simulation with thermal feedback shows a significantly lower power peak compared to the documented experimental measurement, which is not unexpected in the early stages of model development.
Fidelity assessment of a UH-60A simulation on the NASA Ames vertical motion simulator
NASA Technical Reports Server (NTRS)
Atencio, Adolph, Jr.
1993-01-01
Helicopter handling qualities research requires that a ground-based simulation be a high-fidelity representation of the actual helicopter, especially over the frequency range of the investigation. This experiment was performed to assess the current capability to simulate the UH-60A Black Hawk helicopter on the Vertical Motion Simulator (VMS) at NASA Ames, to develop a methodology for assessing the fidelity of a simulation, and to find the causes for lack of fidelity. The approach used was to compare the simulation to the flight vehicle for a series of tasks performed in flight and in the simulator. The results show that subjective handling qualities ratings from flight to simulator overlap, and the mathematical model matches the UH-60A helicopter very well over the range of frequencies critical to handling qualities evaluation. Pilot comments, however, indicate a need for improvement in the perceptual fidelity of the simulation in the areas of motion and visual cuing. The methodology used to make the fidelity assessment proved useful in showing differences in pilot work load and strategy, but additional work is needed to refine objective methods for determining causes of lack of fidelity.
OpenACC performance for simulating 2D radial dambreak using FVM HLLE flux
NASA Astrophysics Data System (ADS)
Gunawan, P. H.; Pahlevi, M. R.
2018-03-01
The aim of this paper is to investigate the performance of the OpenACC platform for computing a 2D radial dambreak. Here, the shallow water equations are used to describe and simulate the 2D radial dambreak with a finite volume method (FVM) using the HLLE flux. OpenACC is a parallel computing platform based on GPU cores, and in this research it is used to minimize the computational time of the numerical scheme. The results show that using OpenACC reduces the computational time. For the dry and wet radial dambreak simulations using 2048 grids, the parallel computational times are 575.984 s and 584.830 s, respectively. These results demonstrate the benefit of OpenACC when compared with the serial times for the dry and wet radial dambreak simulations, which are 28047.500 s and 29269.40 s, respectively.
NASA Astrophysics Data System (ADS)
Li, Ping; Wang, Weiwei; Zhang, Chenxi; An, Yong; Song, Zhijian
2016-07-01
Intraoperative brain retraction leads to a misalignment between the intraoperative positions of the brain structures and their previous positions, as determined from preoperative images. Uniaxial tests on in vitro swine brain samples showed that the mechanical response of brain tissue to compression and extension can be described by hyper-viscoelasticity theory. The brain retraction caused by the mechanical process is a combination of brain tissue compression and extension. In this paper, we first constructed a hyper-viscoelastic framework based on the extended finite element method (XFEM) to simulate intraoperative brain retraction. To explore its effectiveness, we then applied this framework to an in vivo brain retraction simulation. The simulation strictly followed the clinical scenario, in which seven swine were subjected to brain retraction. Our experimental results showed that the hyper-viscoelastic XFEM framework is capable of simulating intraoperative brain retraction and improving the navigation accuracy of an image-guided neurosurgery system (IGNS).
NASA Astrophysics Data System (ADS)
Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.
2012-02-01
Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
Numerical simulation and analysis for low-frequency rock physics measurements
NASA Astrophysics Data System (ADS)
Dong, Chunhui; Tang, Genyang; Wang, Shangxu; He, Yanxiao
2017-10-01
In recent years, several experimental methods have been introduced to measure the elastic parameters of rocks in the relatively low-frequency range, such as differential acoustic resonance spectroscopy (DARS) and stress-strain measurement. It is necessary to verify the validity and feasibility of the applied measurement method and to quantify the sources and levels of measurement error. Relying solely on the laboratory measurements, however, we cannot evaluate the complete wavefield variation in the apparatus. Numerical simulations of elastic wave propagation, on the other hand, are used to model the wavefield distribution and physical processes in the measurement systems, and to verify the measurement theory and analyze the measurement results. In this paper we provide a numerical simulation method to investigate the acoustic waveform response of the DARS system and the quasi-static responses of the stress-strain system, both of which use axisymmetric apparatus. We applied this method to parameterize the properties of the rock samples, the sample locations and the sensor (hydrophone and strain gauges) locations and simulate the measurement results, i.e. resonance frequencies and axial and radial strains on the sample surface, from the modeled wavefield following the physical experiments. Rock physical parameters were estimated by inversion or direct processing of these data, and showed a perfect match with the true values, thus verifying the validity of the experimental measurements. Error analysis was also conducted for the DARS system with 18 numerical samples, and the sources and levels of error are discussed. In particular, we propose an inversion method for estimating both density and compressibility of these samples. The modeled results also showed fairly good agreement with the real experiment results, justifying the effectiveness and feasibility of our modeling method.
The Researches on Damage Detection Method for Truss Structures
NASA Astrophysics Data System (ADS)
Wang, Meng Hong; Cao, Xiao Nan
2018-06-01
This paper presents an effective method to detect damage in truss structures. Numerical simulation and experimental analysis were carried out on a damaged truss structure under instantaneous excitation. The ideal excitation point and appropriate hammering method were determined to extract time domain signals under two working conditions. The frequency response function and principal component analysis were used for data processing, and the angle between the frequency response function vectors was selected as a damage index to ascertain the location of a damaged bar in the truss structure. In the numerical simulation, the time domain signal of all nodes was extracted to determine the location of the damaged bar. In the experimental analysis, the time domain signal of a portion of the nodes was extracted on the basis of an optimal sensor placement method based on the node strain energy coefficient. The results of the numerical simulation and experimental analysis showed that the damage detection method based on the frequency response function and principal component analysis could locate the damaged bar accurately.
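The damage index described above, the angle between frequency response function vectors, can be computed along the lines of the Python sketch below; working with FRF magnitudes and omitting the paper's principal component reduction step are simplifying assumptions made here.

```python
import numpy as np

def frf_angle_index(frf_baseline, frf_current):
    """Angle (degrees) between two FRF feature vectors; a larger angle at a
    given measurement location suggests damage in nearby members."""
    a = np.abs(np.ravel(frf_baseline))
    b = np.abs(np.ravel(frf_current))
    cos_angle = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```

In the paper, the FRF data are first reduced by principal component analysis before the angle is evaluated, which suppresses measurement noise in the index.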
NASA Astrophysics Data System (ADS)
Plante, Ianik; Devroye, Luc
2017-10-01
Ionizing radiation interacts with the water molecules of tissue mostly by ionizations and excitations, which result in the formation of the radiation track structure and the creation of radiolytic species such as H., .OH, H2, H2O2, and e-aq. After their creation, these species diffuse and may chemically react with neighboring species and with the molecules of the medium; radiation chemistry is therefore of great importance in radiation biology. As the chemical species are not distributed homogeneously, conventional models of homogeneous reactions cannot completely describe the reaction kinetics of the particles. In practice, many simulations of radiation chemistry use the Independent Reaction Times (IRT) method, a very fast technique to calculate radiochemical yields, but one which does not calculate the positions of the radiolytic species as a function of time. Step-by-step (SBS) methods, which are able to provide such information, have been used only sparsely because they are computationally expensive. Recent improvements in computer performance now allow the regular use of the SBS method in radiation chemistry. The SBS and IRT methods are both based on the Green's functions of the diffusion equation (GFDE). In this paper, several sampling algorithms for the GFDE and for the IRT method are presented. We show that the IRT and SBS methods are exactly equivalent for two-particle systems for diffusion and partially diffusion-controlled reactions between non-interacting particles. We also show that the results obtained with the SBS simulation method with periodic boundary conditions are in agreement with the predictions of classical reaction kinetics theory, which is an important step towards using this method for modelling biochemical networks and metabolic pathways involved in oxidative stress. Finally, the first simulation results obtained with the code RITRACKS (Relativistic Ion Tracks) are presented.
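As one concrete example of sampling from the Green's function machinery discussed above, the Python sketch below draws an IRT reaction time for a single pair undergoing a fully diffusion-controlled reaction, using the classical Smoluchowski first-passage probability W(t) = (R/r0) * erfc((r0 - R)/sqrt(4Dt)); the partially diffusion-controlled case treated in the paper requires a more involved expression, so this is background illustration only.

```python
import numpy as np
from scipy.special import erfcinv

def irt_reaction_time(r0, R, D, rng):
    """Inverse-transform sample of the reaction time for one particle pair.

    r0 : initial separation, R : reaction radius, D : mutual diffusion
    coefficient.  With probability 1 - R/r0 the pair escapes (returns inf).
    """
    u = rng.random()
    if u >= R / r0:                     # pair never reacts
        return np.inf
    z = erfcinv(u * r0 / R)             # solve W(t) = u for t
    return ((r0 - R) / z) ** 2 / (4.0 * D)

rng = np.random.default_rng(0)
times = [irt_reaction_time(r0=1.0, R=0.5, D=1e-2, rng=rng) for _ in range(5)]
```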
One step linear reconstruction method for continuous wave diffuse optical tomography
NASA Astrophysics Data System (ADS)
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated for a polyvinyl chloride-based material and a breast phantom. The approximation used in this method involves selecting a regularization coefficient and evaluating the difference between two states corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The research is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride-based material and the breast phantom sample are used to produce the experimental data. Comparisons between experimental and simulated results are conducted to validate the proposed method. The images produced by the one-step linear reconstruction method are very close to the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
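The abstract does not give the algebraic details, but a generic one-step linearized reconstruction of this kind typically reduces to a single regularized solve relating the change in boundary data to the change in optical properties. The Python sketch below shows such a Tikhonov-style step under that assumption, with the Jacobian J assumed to come from a diffusion forward model; it is a sketch of the general technique, not the authors' exact formulation.

```python
import numpy as np

def one_step_recon(J, d_meas, lam):
    """One-step linear reconstruction: solve the regularized normal
    equations (J^T J + lam * I) dx = J^T dy.

    J      : sensitivity (Jacobian) of boundary data to optical properties
    d_meas : difference between data with and without the perturbation
    lam    : regularization coefficient (our reading of the paper's
             'regulation coefficient')
    """
    JtJ = J.T @ J
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ d_meas)
```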
Cluster mass inference via random field theory.
Zhang, Hui; Nichols, Thomas E; Johnson, Timothy D
2009-01-01
Cluster extent and voxel intensity are two widely used statistics in neuroimaging inference. Cluster extent is sensitive to spatially extended signals while voxel intensity is better for intense but focal signals. In order to leverage strength from both statistics, several nonparametric permutation methods have been proposed to combine the two methods. Simulation studies have shown that of the different cluster permutation methods, the cluster mass statistic is generally the best. However, to date, there is no parametric cluster mass inference available. In this paper, we propose a cluster mass inference method based on random field theory (RFT). We develop this method for Gaussian images, evaluate it on Gaussian and Gaussianized t-statistic images and investigate its statistical properties via simulation studies and real data. Simulation results show that the method is valid under the null hypothesis and demonstrate that it can be more powerful than the cluster extent inference method. Further, analyses with a single subject and a group fMRI dataset demonstrate better power than traditional cluster size inference, and good accuracy relative to a gold-standard permutation test.
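To fix ideas, the cluster mass statistic itself can be computed as in the Python sketch below; one common convention defines a cluster's mass as the integral of the statistic above the cluster-forming threshold, which is what we assume here. The paper's contribution is the parametric RFT null distribution for this statistic, not the computation itself.

```python
import numpy as np
from scipy import ndimage

def cluster_masses(stat_img, threshold):
    """Mass of each supra-threshold cluster in a statistic image."""
    supra = stat_img > threshold
    labels, n_clusters = ndimage.label(supra)      # connected components
    if n_clusters == 0:
        return np.array([])
    return np.asarray(ndimage.sum(stat_img - threshold, labels,
                                  index=range(1, n_clusters + 1)))
```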
3D Lagrangian VPM: simulations of the near-wake of an actuator disc and horizontal axis wind turbine
NASA Astrophysics Data System (ADS)
Berdowski, T.; Ferreira, C.; Walther, J.
2016-09-01
The application of a 3-dimensional Lagrangian vortex particle method has been assessed for modelling the near-wake of an axisymmetrical actuator disc and a 3-bladed horizontal axis wind turbine with prescribed circulation from the MEXICO (Model EXperiments In COntrolled conditions) experiment. The method was developed in the framework of the open-source Parallel Particle-Mesh library for handling efficient data-parallelism on a CPU (Central Processing Unit) cluster, and utilized an O(N log N)-type fast multipole method for computational acceleration. Simulations with the actuator disc resulted in a wake expansion, velocity deficit profile, and induction factor that showed close agreement with theoretical, numerical, and experimental results from the literature. The shear layer expansion was also present; the Kelvin-Helmholtz instability in the shear layer was triggered by the round-off limitations of a numerical method, but this instability was delayed to beyond 1 diameter downstream due to the particle smoothing. Simulations with the 3-bladed turbine demonstrated that a purely 3-dimensional flow representation is challenging to model with particles. The manifestation of local complex flow structures of highly stretched vortices made the simulation unstable, but this was successfully counteracted by the application of a particle strength exchange scheme. The axial and radial velocity profiles over the near wake have been compared to those of the original MEXICO experiment, which showed close agreement between results.
Design and application of 3D-printed stepless beam modulators in proton therapy
NASA Astrophysics Data System (ADS)
Lindsay, C.; Kumlin, J.; Martinez, D. M.; Jirasek, A.; Hoehr, C.
2016-06-01
A new method for the design of stepless beam modulators for proton therapy is described and verified. Simulations of the classic designs are compared against the stepless method for various modulation widths which are clinically applicable in proton eye therapy. Three modulator wheels were printed using a Stratasys Objet30 3D printer. The resulting depth dose distributions showed improved uniformity over the classic stepped designs. Simulated results imply a possible improvement in distal penumbra width; however, more accurate measurements are needed to fully verify this effect. Lastly, simulations were done to model bio-equivalence to Co-60 cell kill. A wheel was successfully designed to flatten this metric.
Analysis of biomolecular solvation sites by 3D-RISM theory.
Sindhikara, Daniel J; Hirata, Fumio
2013-06-06
We derive, implement, and apply equilibrium solvation site analysis for biomolecules. Our method utilizes 3D-RISM calculations to quickly obtain equilibrium solvent distributions without the necessity of simulation or the limits of solvent sampling. Our analysis of these distributions extracts highest-likelihood poses of solvent as well as localized entropies, enthalpies, and solvation free energies. We demonstrate our method on a structure of HIV-1 protease for which excellent structural and thermodynamic data are available for comparison. Our results, obtained within minutes, show systematic agreement with available experimental data. Further, our results are in good agreement with established simulation-based solvent analysis methods. This method can be used not only for visual analysis of active site solvation but also for virtual screening methods and experimental refinement.
NASA Astrophysics Data System (ADS)
Jin, Zhongkun; Yin, Yao; Liu, Bilong
2016-03-01
The finite element method is often used to investigate the sound absorption of an anechoic coating backed with an orthogonally rib-stiffened plate. Since the anechoic coating contains cavities, the number of grid nodes in a periodic unit cell is usually large. An equivalent modulus method is proposed to reduce the large number of nodes by calculating an equivalent homogeneous layer. Applications of this method to several models show that it can predict well the sound absorption coefficient of such structures over a wide frequency range. Based on the simulation results, the sound absorption performance of such structures and the influence of different backings on the first absorption peak are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Arka; Dalal, Neal
We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.
Discrete Molecular Dynamics Approach to the Study of Disordered and Aggregating Proteins.
Emperador, Agustí; Orozco, Modesto
2017-03-14
We present a refinement of the Coarse Grained PACSAB force field for Discrete Molecular Dynamics (DMD) simulations of proteins in aqueous conditions. Like the original version, the refined method provides a good representation of the structure and dynamics of folded proteins, but it provides much better representations of a variety of unfolded proteins, including some very large ones that are impossible to analyze by atomistic simulation methods. The PACSAB/DMD method also reproduces aggregation properties accurately, providing good pictures of the structural ensembles of proteins with a folded core and an intrinsically disordered region. The combination of accuracy and speed makes the method presented here a good alternative for the exploration of unstructured protein systems.
Study on electrochemical corrosion mechanism of steel foot of insulators for HVDC lines
NASA Astrophysics Data System (ADS)
Zheng, Weihua; Sun, Xiaoyu; Fan, Youping
2017-09-01
This paper examines the mechanism of electrochemical corrosion of insulator steel feet in HVDC transmission lines and summarizes five commonly used accelerated test methods for artificial electrochemical corrosion. The various methods are analyzed and compared, and a simulation test of electrochemical corrosion of insulator steel feet is carried out using the water jet method. The experimental results show that the environment simulated by the water jet method is close to the real operating environment. Among the three suspension modes of insulators in actual operation, corrosion is most severe for V-type suspension hardware, followed by tension string suspension, while the corrosion rate of linear string hardware is the slowest.
A Novel Actuator for Simulation of Epidural Anesthesia and Other Needle Insertion Procedures
Magill, John C.; Byl, Marten F.; Hinds, Michael F.; Agassounon, William; Pratt, Stephen D.; Hess, Philip E.
2010-01-01
Introduction When navigating a needle from skin to epidural space, a skilled clinician maintains a mental model of the anatomy and uses the various forms of haptic and visual feedback to track the location of the needle tip. Simulating the procedure requires an actuator that can produce the feel of tissue layers even as the needle direction changes from the ideal path. Methods A new actuator and algorithm architecture simulate forces associated with passing a needle through varying tissue layers. The actuator uses a set of cables to suspend a needle holder. The cables are wound onto spools controlled by brushless motors. An electromagnetic tracker is used to monitor the position of the needle tip. Results Novice and expert clinicians simulated epidural insertion with the simulator. Preliminary depth-time curves show that the user responds to changes in tissue properties as the needle is advanced. Some discrepancy in clinician response indicates that the feel of the simulator is sensitive to technique, thus perfect tissue property simulation has not been achieved. Conclusions The new simulator is able to approximately reproduce properties of complex multilayer tissue structures, including fine-scale texture. Methods for improving fidelity of the simulation are identified. PMID:20651481
Comparison of normalization methods for differential gene expression analysis in RNA-Seq experiments
Maza, Elie; Frasse, Pierre; Senin, Pavel; Bouzayen, Mondher; Zouine, Mohamed
2013-01-01
In recent years, RNA-Seq technologies have become a powerful tool for transcriptome studies. However, computational methods dedicated to the analysis of high-throughput sequencing data are yet to be standardized. In particular, it is known that the choice of a normalization procedure leads to great variability in the results of differential gene expression analysis. The present study compares the most widespread normalization procedures and proposes a novel one aiming at removing an inherent bias of the studied transcriptomes related to their relative size. Comparisons of the normalization procedures are performed on real and simulated data sets. Analyses of real RNA-Seq data sets, performed with all the different normalization methods, show that only 50% of significantly differentially expressed genes are common. This result highlights the influence of the normalization step on the differential expression analysis. Analyses of real and simulated data sets give similar results, showing three groups of procedures with the same behavior. The group including the novel method, named "Median Ratio Normalization" (MRN), gives the lowest number of false discoveries. Within this group the MRN method is less sensitive to the modification of parameters related to the relative size of transcriptomes, such as the number of down- and upregulated genes and the gene expression levels. The newly proposed MRN method efficiently deals with the intrinsic bias resulting from the relative size of the studied transcriptomes. Validation with real and simulated data sets confirmed that MRN is more consistent and robust than existing methods. PMID:26442135
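The paper defines MRN precisely; as background, the closely related median-of-ratios construction that it refines can be sketched in a few lines of Python. This is the DESeq-style version, not the authors' exact procedure.

```python
import numpy as np

def median_ratio_factors(counts):
    """DESeq-style median-of-ratios size factors.

    counts : (n_genes, n_samples) raw read counts.  Each sample's factor is
    the median, over genes detected in every sample, of its ratio to a
    per-gene geometric-mean reference.
    """
    counts = np.asarray(counts, dtype=float)
    keep = np.all(counts > 0, axis=1)          # genes detected everywhere
    log_c = np.log(counts[keep])
    log_ref = log_c.mean(axis=1)               # log geometric mean per gene
    return np.exp(np.median(log_c - log_ref[:, None], axis=0))
```

Dividing each sample's counts by its factor puts the samples on a common scale before differential expression testing; MRN additionally corrects for the relative size of the transcriptomes being compared.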
NASA Astrophysics Data System (ADS)
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
There are a wide variety of methods that have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are setup in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is setup to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc
2011-02-01
In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging, and techniques in which the eddy currents interact with the biological system, such as the alteration of neurophysiology by transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill-conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditioning and speeds up the numerical convergence. This paper shows how IIM is based on IM and what its advantages are. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.
Long-range temporal correlations in the Kardar-Parisi-Zhang growth: numerical simulations
NASA Astrophysics Data System (ADS)
Song, Tianshu; Xia, Hui
2016-11-01
To analyze long-range temporal correlations in surface growth, we study numerically the (1 + 1)-dimensional Kardar-Parisi-Zhang (KPZ) equation driven by temporally correlated noise, and obtain the scaling exponents based on two different numerical methods. Our simulations show that the numerical results are in good agreement with the dynamic renormalization group (DRG) predictions, and are also consistent with the simulation results of the ballistic deposition (BD) model.
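A minimal explicit integration step for the (1 + 1)-dimensional KPZ equation is sketched below in Python. Note that it is written for white noise, whereas the paper drives the equation with temporally correlated noise, which would replace the `noise` argument with a correlated sequence; naive discretizations of the nonlinear term are also known to need care, so the parameters here are purely illustrative.

```python
import numpy as np

def kpz_step(h, dt, dx, nu, lam, noise):
    """One explicit Euler-Maruyama step of the discretized KPZ equation
    h_t = nu * h_xx + (lam / 2) * (h_x)^2 + eta, periodic boundaries."""
    h_xx = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
    h_x = (np.roll(h, -1) - np.roll(h, 1)) / (2.0 * dx)
    return h + dt * (nu * h_xx + 0.5 * lam * h_x**2) + np.sqrt(dt) * noise

rng = np.random.default_rng(0)
h = np.zeros(256)
for _ in range(1000):
    h = kpz_step(h, dt=0.01, dx=1.0, nu=1.0, lam=1.0,
                 noise=rng.normal(size=256))
```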
Integration of time as a factor in ergonomic simulation.
Walther, Mario; Muñoz, Begoña Toledo
2012-01-01
The paper describes the application of a simulation-based ergonomic evaluation. Within a pilot project, the algorithms of the screening method of the European Assembly Worksheet were transferred into an existing digital human model. Movement data were recorded with a specially developed hybrid motion capture system. A prototype of the system was built and is currently being tested at the Volkswagen Group. First results showed the feasibility of the simulation-based ergonomic evaluation with motion capture.
Seismic wavefield propagation in 2D anisotropic media: Ray theory versus wave-equation simulation
NASA Astrophysics Data System (ADS)
Bai, Chao-ying; Hu, Guang-yi; Zhang, Yan-teng; Li, Zhong-sheng
2014-05-01
Although ray theory is based on the high-frequency assumption of the elastic wave equation, ray theory and wave-equation simulation methods should serve as mutual proofs of each other and hence be developed jointly; in fact, they have progressed largely in parallel and independently. For this reason, in this paper we try an alternative way to mutually verify and test the computational accuracy and solution correctness of both the ray theory (the multistage irregular shortest-path method) and the wave-equation simulation methods (both the staggered finite difference method and the pseudo-spectral method) in anisotropic VTI and TTI media. Through the analysis and comparison of wavefield snapshots, common source gather profiles and synthetic seismograms, it is possible not only to verify the accuracy and correctness of each method, at least for kinematic features, but also to thoroughly understand the kinematic and dynamic features of wave propagation in anisotropic media. The results show that both the staggered finite difference method and the pseudo-spectral method yield the same results even for complex anisotropic media (such as a fault model); the multistage irregular shortest-path method is capable of predicting kinematic features similar to those of the wave-equation simulation methods, so the two approaches can be used to test each other for methodological accuracy and solution correctness. In addition, with the aid of the ray tracing results, it is easy to identify the multi-phases (or multiples) in the wavefield snapshots, common source point gather seismic sections and synthetic seismograms predicted by the wave-equation simulation method, which is a key issue for later seismic application.
Rubbi, Ivan; Ferri, Paola; Andreina, Giulia; Cremonini, Valeria
2016-01-01
Simulation in the context of the educational workshop is becoming an important learning method, as it allows realistic clinical-care situations to be played out. These vocational training activities promote the development of cognitive, affective and psychomotor skills in a safe and risk-free pedagogical context, but they need to be assessed using valid and reliable instruments. The aim was to assess the satisfaction of students of a Degree in Nursing in northern Italy with static and high-fidelity simulator exercises and clinical cases. A prospective observational study was conducted involving a non-probabilistic sample of 51 third-year students throughout the academic year 2013/14. The data collection instrument consists of three questionnaires (Student Satisfaction and Self-confidence in Learning Scale, Educational Practices Questionnaire, Simulation Design Scale) and 3 questions on overall satisfaction. Statistical analysis was performed with SPSS 20.0 and Office 2003 Excel. A response rate of 89.5% was obtained. Cronbach's alpha showed good internal reliability (α = .982). The students were generally satisfied with the activities carried out in the teaching laboratory, showing more enthusiasm for simulation with static mannequins (71%) and with high-fidelity simulators (60%), activities in which they experienced significant involvement and active learning. Teaching with clinical cases scored a lower degree of satisfaction (38%), and this method showed the largest number of weaknesses.
Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method
Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao
2016-01-01
This paper presents a clump model based on the Discrete Element Method. The clump model is closer to a real particle than a spherical particle. Numerical simulations of several tests of dry granular flow in an inclined chute impacting a rigid wall have been performed. Five clump models with different sphericity were used in the simulations. By comparing the simulation results with the experimental results for the normal force on the rigid wall, the clump model with the best sphericity was selected for the subsequent numerical simulation analysis and discussion. The calculated normal forces showed good agreement with the experimental results, which verifies the effectiveness of the clump model. Then, the total normal force and bending moment on the rigid wall and the motion process of the granular flow were further analyzed. Finally, a comparative analysis of numerical simulations using the clump model with different grain compositions was performed. By observing the normal force on the rigid wall and the distribution of particle sizes at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed. The main finding is that, as particle size increases, the peak force at the retaining wall also increases. The results can provide a basis for research on related disasters and the design of protective structures. PMID:27513661
Profile Optimization Method for Robust Airfoil Shape Optimization in Viscous Flow
NASA Technical Reports Server (NTRS)
Li, Wu
2003-01-01
Simulation results obtained by using FUN2D for robust airfoil shape optimization in transonic viscous flow are included to show the potential of the profile optimization method for generating fairly smooth optimal airfoils with no off-design performance degradation.
Merritt, M.L.
1993-01-01
The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms.
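The dependence on the approximation method noted above is easy to reproduce in a toy setting: the Python sketch below advances a 1D advection step with either centered or backward (upwind) differencing, the latter adding the artificial numerical dispersion that smears simulated transition zones; the grid, velocity and scheme labels are illustrative assumptions, not the report's model.

```python
import numpy as np

def advect_step(c, v, dx, dt, scheme="centered"):
    """One explicit step of 1D advection on a periodic grid (v > 0).

    Backward (upwind) differencing adds numerical diffusion of order
    v*dx/2, widening simulated transition zones; the centered form does
    not, but is prone to spurious oscillations near sharp fronts.
    """
    if scheme == "centered":
        dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2.0 * dx)
    else:  # backward / upwind
        dcdx = (c - np.roll(c, 1)) / dx
    return c - v * dt * dcdx
```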
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of nonlinear dynamic behavior of complex system, Agent-based models (ABM) have been commonly used for immune system simulation. However, it is crucial for ABM to obtain an appropriate estimation for the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation by integrating the ABM and regression method under the framework of history matching is developed. A novel parameter estimation method by incorporating the experiment data for the simulator ABM during the procedure is proposed. First, we employ ABM as simulator to simulate the immune system. Then, the dimension-reduced type generalized additive model (GAM) is employed to train a statistical regression model by using the input and output data of ABM and play a role as an emulator during history matching. Next, we reduce the input space of parameters by introducing an implausible measure to discard the implausible input values. At last, the estimation of model parameters is obtained using the particle swarm optimization algorithm (PSO) by fitting the experiment data among the non-implausible input values. The real Influeza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predicting accuracy, but it also owns favorable computational efficiency. PMID:29194393
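The implausibility measure used to discard parameter values during history matching is a standard construction; a minimal Python sketch under the usual definition is given below, where the emulator would be the trained GAM and the cutoff of 3 is the conventional choice, both assumptions on our part rather than details from the paper.

```python
import numpy as np

def implausibility(emu_mean, emu_var, z_obs, obs_var):
    """Standard history-matching implausibility for each candidate input:
    distance from the observation in units of total standard deviation."""
    return np.abs(emu_mean - z_obs) / np.sqrt(emu_var + obs_var)

def non_implausible(inputs, emu_mean, emu_var, z_obs, obs_var, cutoff=3.0):
    """Keep only inputs whose emulator prediction is plausibly consistent
    with the observed data; PSO then searches this reduced space."""
    mask = implausibility(emu_mean, emu_var, z_obs, obs_var) < cutoff
    return inputs[mask]
```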
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results (simulated breakthrough curves, sensitivity analysis, and calibrated parameter values) change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
Q-Method Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Zanetti, Renato; Ainscough, Thomas; Christian, John; Spanos, Pol D.
2012-01-01
A new algorithm is proposed that smoothly integrates non-linear estimation of the attitude quaternion using Davenport's q-method and estimation of non-attitude states through an extended Kalman filter. The new method is compared to a similar existing algorithm, showing its similarities and differences. The validity of the proposed approach is confirmed through numerical simulations.
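The core computation in Davenport's q-method is compact enough to sketch. The following is a minimal illustration, not the paper's integrated filter (which couples this step with an extended Kalman filter for the non-attitude states); all names are illustrative, and the quaternion convention is [vector; scalar].

```python
import numpy as np

def davenport_q(body_vecs, ref_vecs, weights):
    """Optimal attitude quaternion from weighted vector-pair observations.

    Builds Davenport's 4x4 K matrix and returns its dominant eigenvector,
    which maximizes Wahba's gain function.
    """
    B = sum(a * np.outer(b, r) for a, b, r in zip(weights, body_vecs, ref_vecs))
    S = B + B.T
    sigma = np.trace(B)
    z = sum(a * np.cross(b, r) for a, b, r in zip(weights, body_vecs, ref_vecs))
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    eigvals, eigvecs = np.linalg.eigh(K)    # K is symmetric, so eigh applies
    return eigvecs[:, np.argmax(eigvals)]   # [qx, qy, qz, qw], up to sign
```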
ERIC Educational Resources Information Center
Gilstrap, Donald L.
2013-01-01
In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…
Fischer, E A J; De Vlas, S J; Richardus, J H; Habbema, J D F
2008-09-01
Microsimulation of infectious diseases requires simulation of many life histories of interacting individuals. In particular, relatively rare infections such as leprosy need to be studied in very large populations. Computation time increases disproportionally with the size of the simulated population. We present a novel method, MUSIDH, an acronym for multiple use of simulated demographic histories, to reduce computation time. Demographic history refers to the processes of birth, death, and all other demographic events that should be unrelated to the natural course of an infection (i.e., non-fatal infections). MUSIDH attaches a fixed number of infection histories to each demographic history, and these infection histories interact as if they were the infection histories of separate individuals. With two examples, mumps and leprosy, we show that the method can give a factor-of-50 reduction in computation time at the cost of a small loss in precision. The largest reductions are obtained for rare infections with complex demographic histories.
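The reuse idea behind MUSIDH can be sketched in a few lines. The toy code below is an assumption-level illustration (all distributions and names are hypothetical placeholders, and the interactions between infection histories that the actual method retains are omitted); it shows only the essential economy, namely that the expensive demographic history is generated once and shared by several cheap infection histories.

```python
import random

def simulate_demography(rng):
    """One demographic life history: birth and death times (toy placeholder)."""
    birth = rng.uniform(0, 100)
    return {"birth": birth, "death": birth + rng.expovariate(1 / 60)}

def attach_infection_history(demo, rng):
    """One infection history constrained to the (non-fatal) lifespan."""
    t = demo["birth"] + rng.expovariate(1 / 40)  # hypothetical infection hazard
    return {"infected_at": t if t < demo["death"] else None}

def musidh(n_demographies, reuse_factor, seed=1):
    """Reuse each costly demographic history for `reuse_factor` infections."""
    rng = random.Random(seed)
    population = []
    for _ in range(n_demographies):
        demo = simulate_demography(rng)   # expensive part, computed once
        for _ in range(reuse_factor):     # cheap part, computed many times
            population.append((demo, attach_infection_history(demo, rng)))
    return population
```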
Vallance, Aaron K.; Hemani, Ashish; Fernandez, Victoria; Livingstone, Daniel; McCusker, Kerri; Toro-Troconis, Maria
2014-01-01
Aims and method To develop and evaluate a novel teaching session on clinical assessment using role play simulation. Teaching and research sessions occurred sequentially in computer laboratories. Ten medical students were divided into two online small-group teaching sessions. Students role-played as clinician avatars and the teacher played a suicidal adolescent avatar. Questionnaire and focus-group methodology evaluated participants’ attitudes to the learning experience. Quantitative data were analysed using SPSS, qualitative data through nominal-group and thematic analyses. Results Participants reported improvements in psychiatric skills/knowledge, expressing less anxiety and more enjoyment than role-playing face to face. Data demonstrated a positive relationship between simulator fidelity and perceived utility. Some participants expressed concern about added value over other learning methods and non-verbal communication. Clinical implications The study shows that virtual worlds can successfully host role play simulation, valued by students as a useful learning method. The potential for distance learning would allow delivery irrespective of geographical distance and boundaries. PMID:25285217
A Computer Simulation of Community Pharmacy Practice for Educational Use.
Bindoff, Ivan; Ling, Tristan; Bereznicki, Luke; Westbury, Juanita; Chalmers, Leanne; Peterson, Gregory; Ollington, Robert
2014-11-15
To provide a computer-based learning method for pharmacy practice that is as effective as paper-based scenarios, but more engaging and less labor-intensive. We developed a flexible and customizable computer simulation of community pharmacy. Using it, the students would be able to work through scenarios which encapsulate the entirety of a patient presentation. We compared the traditional paper-based teaching method to our computer-based approach using equivalent scenarios. The paper-based group had 2 tutors while the computer group had none. Both groups were given a prescenario and postscenario clinical knowledge quiz and survey. Students in the computer-based group had generally greater improvements in their clinical knowledge score, and third-year students using the computer-based method also showed more improvements in history taking and counseling competencies. Third-year students also found the simulation fun and engaging. Our simulation of community pharmacy provided an educational experience as effective as the paper-based alternative, despite the lack of a human tutor.
Cros, David; Sánchez, Leopoldo; Cochard, Benoit; Samper, Patrick; Denis, Marie; Bouvet, Jean-Marc; Fernández, Jesús
2014-04-01
Explicit pedigree reconstruction by simulated annealing gave reliable estimates of genealogical coancestry in plant species, especially when selfing rate was lower than 0.6, using a realistic number of markers. Genealogical coancestry information is crucial in plant breeding to estimate genetic parameters and breeding values. The approach of Fernández and Toro (Mol Ecol 15:1657-1667, 2006) to estimate genealogical coancestries from molecular data through pedigree reconstruction was limited to species with separate sexes. In this study it was extended to plants, allowing hermaphroditism and monoecy, with possible selfing. Moreover, some improvements were made to take previous knowledge on the population demographic history into account. The new method was validated using simulated and real datasets. Simulations showed that accuracy of estimates was high with 30 microsatellites, with the best results obtained for selfing rates below 0.6. In these conditions, the root mean square error (RMSE) between the true and estimated genealogical coancestry was small (<0.07), although the number of ancestors was overestimated and the selfing rate could be biased. Simulations also showed that linkage disequilibrium between markers and departure from the Hardy-Weinberg equilibrium in the founder population did not affect the efficiency of the method. Real oil palm data confirmed the simulation results, with a high correlation between the true and estimated genealogical coancestry (>0.9) and a low RMSE (<0.08) using 38 markers. The method was applied to the Deli oil palm population for which pedigree data were scarce. The estimated genealogical coancestries were highly correlated (>0.9) with the molecular coancestries using 100 markers. Reconstructed pedigrees were used to estimate effective population sizes. In conclusion, this method gave reliable genealogical coancestry estimates. The strategy was implemented in the software MOLCOANC 3.0.
Using multi-criteria analysis of simulation models to understand complex biological systems
Maureen C. Kennedy; E. David Ford
2011-01-01
Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
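As a concrete illustration of the multi-criteria idea, the sketch below (an assumption-level example, not the authors' code) filters candidate parameter sets down to the Pareto-optimal subset when each set is scored against several model-output criteria, lower being better on all.

```python
import numpy as np

def pareto_front(errors):
    """Return indices of non-dominated rows; each row holds one parameter
    set's error on several assessment criteria (lower is better on all)."""
    errors = np.asarray(errors)
    n = len(errors)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(errors[j] <= errors[i]) and np.any(errors[j] < errors[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# e.g. three parameter sets scored on two model-output criteria:
print(pareto_front([[0.2, 0.9], [0.5, 0.5], [0.6, 0.6]]))  # -> [0, 1]
```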
ERIC Educational Resources Information Center
William, Abeer; Vidal, Victoria L.; John, Pamela
2016-01-01
This quasi-experimental study compared differences in phlebotomy performance on a live client, between a control group taught through the traditional method and an experimental group using virtual reality simulation. The study showed both groups had performed successfully, using the following metrics: number of reinsertions, pain factor, hematoma…
Novel inter-crystal scattering event identification method for PET detectors
NASA Astrophysics Data System (ADS)
Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung
2018-06-01
Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is applicable even to light-sharing designs. In the proposed method, the detector observation is treated as a linear problem and ICS events are identified by solving it. Two ICS identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated in simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10 and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm3 LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. The simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively; these values improved to 1.72 and 1.83 mm after ICS recovery. In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that ICS recovery based on the proposed identification method leads to improved resolution.
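A minimal sketch of the linear-problem view described above may help. Here the readout is modeled as m = A x, where the columns of A hold per-crystal light patterns; the pseudoinverse solution and a non-negativity-constrained least-squares solution stand in for the two identification routes. The constraint set and the multi-crystal threshold are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

def identify_ics(A, m):
    """Estimate per-crystal energy deposits x from photosensor readout m,
    with the detector response modeled linearly as m = A @ x.

    A : (n_pixels, n_crystals) light-distribution matrix (one column per crystal)
    m : (n_pixels,) measured light pattern for one event
    """
    x_pinv = np.linalg.pinv(A) @ m   # pseudoinverse solution (may go negative)
    x_nnls, _ = nnls(A, m)           # convex, non-negativity-constrained solution
    # An ICS event deposits energy in more than one crystal; the 5% cut
    # below is a hypothetical threshold for flagging such events:
    hit_crystals = np.flatnonzero(x_nnls > 0.05 * x_nnls.sum())
    return x_pinv, x_nnls, hit_crystals
```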
Experiments and FEM simulations of fracture behaviors for ADC12 aluminum alloy under impact load
NASA Astrophysics Data System (ADS)
Hu, Yumei; Xiao, Yue; Jin, Xiaoqing; Zheng, Haoran; Zhou, Yinge; Shao, Jinhua
2016-11-01
Using a combination of experiment and simulation, the fracture behavior of the brittle ADC12 aluminum alloy was studied. Five typical experiments were carried out on this material, with corresponding data collected at different stress states and dynamic strain rates. Fractographs revealed that the fracture morphologies of specimens tested at several rates differed, indicating that the fracture was predominantly brittle in nature. The fracture processes of those specimens were simulated with the finite element method, and good consistency was observed between simulations and experiments. In the simulations, the Johnson-Cook model was chosen to describe the damage development and to predict failure, using parameters determined from the experimental data. Subsequently, a crash simulation of an ADC12 engine mount bracket was conducted, and the results showed good agreement with the experiments. This agreement shows that our approach can provide an accurate description of the deformation and fracture processes of the studied alloy.
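For reference, the Johnson-Cook damage model named above is conventionally written as follows (standard textbook form; the constants D1-D5 calibrated from the experiments are not reproduced here):

```latex
\varepsilon_f = \left[D_1 + D_2\, e^{D_3 \sigma^{*}}\right]
                \left[1 + D_4 \ln \dot{\varepsilon}^{*}\right]
                \left[1 + D_5\, T^{*}\right],
\qquad
D = \sum \frac{\Delta \varepsilon_p}{\varepsilon_f},
```

where σ* is the stress triaxiality, ε̇* the dimensionless plastic strain rate, T* the homologous temperature, and failure is predicted when the accumulated damage D reaches 1.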
NASA Astrophysics Data System (ADS)
Li, Xin; Song, Weiying; Yang, Kai; Krishnan, N. M. Anoop; Wang, Bu; Smedskjaer, Morten M.; Mauro, John C.; Sant, Gaurav; Balonis, Magdalena; Bauchy, Mathieu
2017-08-01
Although molecular dynamics (MD) simulations are commonly used to predict the structure and properties of glasses, they are intrinsically limited to short time scales, necessitating the use of fast cooling rates. It is therefore challenging to compare results from MD simulations to experimental results for glasses cooled on typical laboratory time scales. Based on MD simulations of a sodium silicate glass with varying cooling rate (from 0.01 to 100 K/ps), here we show that thermal history primarily affects the medium-range order structure, while the short-range order is largely unaffected over the range of cooling rates simulated. This results in a decoupling between the enthalpy and volume relaxation functions, where the enthalpy quickly plateaus as the cooling rate decreases, whereas density exhibits a slower relaxation. Finally, we show that, using the proper extrapolation method, the outcomes of MD simulations can be meaningfully compared to experimental values when extrapolated to slower cooling rates.
Verification technology of remote sensing camera satellite imaging simulation based on ray tracing
NASA Astrophysics Data System (ADS)
Gu, Qiongqiong; Chen, Xiaomei; Yang, Deyun
2017-08-01
Remote sensing satellite camera imaging simulation technology is broadly used to evaluate satellite imaging quality and to test data application systems, but the simulation precision is hard to verify. In this paper, we propose an experimental simulation verification method based on comparing test parameter variations. For the simulation model based on ray tracing, the experiment verifies the model's precision by changing the types of devices, which correspond to the parameters of the model. The experimental results show a similarity of 91.4% between images produced by the ray-tracing model and the experimental images, indicating that the model simulates the remote sensing satellite imaging system well.
Semi-physical Simulation Platform of a Parafoil Nonlinear Dynamic System
NASA Astrophysics Data System (ADS)
Gao, Hai-Tao; Yang, Sheng-Bo; Zhu, Er-Lin; Sun, Qing-Lin; Chen, Zeng-Qiang; Kang, Xiao-Feng
2013-11-01
Focusing on the problems encountered in simulation and experiment on a parafoil nonlinear dynamic system, such as limited methods, high cost, and low efficiency, we present a semi-physical simulation platform. It is designed by connecting parts of the physical objects to a computer, and it remedies the defect that a pure computer simulation is entirely divorced from the real environment. The main components of the platform and their functions, as well as the simulation flows, are introduced. The feasibility and validity are verified through a simulation experiment. The experimental results show that the platform is significant for improving the quality of the parafoil fixed-point airdrop system, shortening the development cycle, and saving cost.
Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes
NASA Astrophysics Data System (ADS)
Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten
2017-11-01
Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work: an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.
A Force Balanced Fragmentation Method for ab Initio Molecular Dynamic Simulation of Protein.
Xu, Mingyuan; Zhu, Tong; Zhang, John Z H
2018-01-01
A force-balanced generalized molecular fractionation with conjugate caps (FB-GMFCC) method is proposed for ab initio molecular dynamics simulation of proteins. In this approach, the energy of the protein is computed by a linear combination of the QM energies of individual residues and molecular fragments that account for the two-body interaction of hydrogen bonds between backbone peptides. The atomic forces on the capped H atoms are corrected to conserve the total force on the protein. Using this approach, an ab initio molecular dynamics simulation of an Ace-(ALA)9-NME linear peptide showed conservation of the total energy of the system throughout the simulation. Further, a more robust 110-ps ab initio molecular dynamics simulation was performed for a protein with 56 residues and 862 atoms in explicit water. Compared with the classical force field, the ab initio molecular dynamics simulations gave a better description of the geometry of peptide bonds. Although further development is still needed, the current approach is highly efficient, trivially parallel, and can be applied to ab initio molecular dynamics simulation studies of large proteins.
Mohammed, Yassene; Verhey, Janko F
2005-01-01
Background Laser Interstitial ThermoTherapy (LITT) is a well-established surgical method. The use of LITT has so far been limited to homogeneous tissues, e.g. the liver. One of the reasons is the limited capability of existing treatment planning models to calculate the damage zone accurately. Treatment planning in inhomogeneous tissues, especially in regions near major vessels, still poses a challenge. In order to extend the application of LITT to a wider range of anatomical regions, new simulation methods are needed. The model described in this article enables efficient simulation for predicting damaged tissue as a basis for a future laser-surgical planning system. Previously we described the dependency of the model on geometry; in the present paper, which includes two video files, we focus on the methodological, physical, and mathematical background of the model. Methods In contrast to previous simulation attempts, our model is based on the finite element method (FEM). We propose the use of LITT in sensitive areas such as the neck region to treat tumours in lymph nodes 0.5-2 cm in diameter near the carotid artery. Our model is based on calculations describing the light distribution using the diffusion approximation of transport theory; the temperature rise using the bioheat equation, including the effect of microperfusion in tissue, to determine the extent of thermal damage; and the dependency of thermal and optical properties on temperature and injury. Injury is estimated using a damage integral. To check our model we performed a first in vitro experiment on porcine muscle tissue. Results We derived the geometry from 3D ultrasound data and show, for this geometry, the energy distribution, the temperature rise, and the damage zone. We then compare the simulation with the in vitro experiment; the calculation shows an error of 5% along the x-axis parallel to the blood vessel. Conclusions The proposed FEM technique can overcome limitations of other methods and enables efficient simulation for predicting the damage zone induced by LITT. Our calculations show clearly that major vessels would not be damaged. The area/volume of the damaged zone calculated from the simulation fits the in vitro experiment well, with a small deviation. One of the main reasons for the deviation is the lack of accurate values for the tissue optical properties; this needs to be validated in further experiments. PMID:15631630
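For orientation, the governing relations named in the Methods are conventionally written as the Pennes bioheat equation with a laser source term and an Arrhenius damage integral (standard forms; the paper's specific coefficients are not reproduced here):

```latex
\rho c \,\frac{\partial T}{\partial t}
  = \nabla \cdot (k \nabla T)
  + \omega_b \rho_b c_b \,(T_a - T)
  + Q_L(\mathbf{r}, t),
\qquad
\Omega(t) = \int_0^t A \, e^{-E_a / (R\,T(\tau))} \, d\tau,
```

where the perfusion term ω_b ρ_b c_b (T_a − T) models the microperfusion effect, Q_L is the absorbed laser power density obtained from the light-distribution calculation, and tissue is considered damaged where Ω ≥ 1.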
Adaptive control of servo system based on LuGre model
NASA Astrophysics Data System (ADS)
Jin, Wang; Niancong, Liu; Jianlong, Chen; Weitao, Geng
2018-03-01
This paper establishes a mechanical model of a feed system based on the LuGre friction model. To counteract the influence of nonlinear factors on the running stability of the system, a nonlinear observer is designed to estimate the internal state z of the LuGre model, and an adaptive friction compensation controller is designed. Simulink simulation results show that the control method can effectively suppress the adverse effects of friction and external disturbances. The simulations show that the adaptive parameter kz lies between 0.11 and 0.13, and gamma1 between 1.9 and 2.1. The position tracking error reaches the 10^-3 level and stabilizes near zero within 0.3 s; the compensation method achieves better tracking accuracy and robustness.
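For reference, the LuGre model referred to above is conventionally written as (standard form of Canudas de Wit et al.; z below is the internal bristle state that the observer estimates):

```latex
\dot{z} = v - \frac{|v|}{g(v)}\,z,
\qquad
\sigma_0\, g(v) = F_c + (F_s - F_c)\, e^{-(v/v_s)^2},
\qquad
F = \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 v,
```

where v is the sliding velocity, F_c and F_s the Coulomb and static friction levels, v_s the Stribeck velocity, and σ0, σ1, σ2 the bristle stiffness, micro-damping, and viscous coefficients.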
FIND: difFerential chromatin INteractions Detection using a spatial Poisson process
Chen, Yang; Zhang, Michael Q.
2018-01-01
Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. PMID:29440282
Adaptive nonlinear control for autonomous ground vehicles
NASA Astrophysics Data System (ADS)
Black, William S.
We present the background and motivation for ground vehicle autonomy, and focus on uses for space exploration. Using a simple design example of an autonomous ground vehicle we derive the equations of motion. After providing the mathematical background for nonlinear systems and control we present two common methods for exactly linearizing nonlinear systems, feedback linearization and backstepping. We use these in combination with three adaptive control methods: model reference adaptive control, adaptive sliding mode control, and extremum-seeking model reference adaptive control. We show the performance of each combination through several simulation results. We then consider disturbances in the system, and design nonlinear disturbance observers for both single-input-single-output and multi-input-multi-output systems. Finally, we show the performance of these observers with simulation results.
An experimental method for the assessment of color simulation tools.
Lillo, Julio; Alvaro, Leticia; Moreira, Humberto
2014-07-22
The Simulcheck method for evaluating the accuracy of color simulation tools in relation to dichromats is described and used to test three color simulation tools: Variantor, Coblis, and Vischeck. A total of 10 dichromats (five protanopes, five deuteranopes) and 10 normal trichromats participated in the current study. Simulcheck includes two psychophysical tasks: the Pseudoachromatic Stimuli Identification task and the Minimum Achromatic Contrast task. The Pseudoachromatic Stimuli Identification task allows determination of the two chromatic angles (h_uv values) that generate a minimum response in the yellow-blue opponent mechanism and, consequently, pseudoachromatic stimuli (greens or reds). The Minimum Achromatic Contrast task requires the selection of the gray background that produces minimum contrast (near zero change in the achromatic mechanism) for each pseudoachromatic stimulus selected in the previous task (L_R values). Results showed important differences in the colorimetric transformations performed by the three evaluated simulation tools and their accuracy levels. The Vischeck simulation accurately implemented the algorithm of Brettel, Viénot, and Mollon (1997). Only Vischeck appeared accurate (similarity in h_uv and L_R values between real and simulated dichromats) and, consequently, could render reliable color selections. It is concluded that Simulcheck is a consistent method because it provided an equivalent pattern of results for h_uv and L_R values irrespective of the stimulus set used to evaluate a simulation tool. Simulcheck was also considered valid because real dichromats provided the expected h_uv and L_R values when performing the two psychophysical tasks included in this method. © 2014 ARVO.
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased-array transducer probe. A domain model is built from the input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusing the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to the transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of automatically generating viewpoint-dependent realistic US images with an inherent Rician-distributed speckle pattern. The proposed simulator can reproduce shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also present preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates; such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
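The reflected-energy bookkeeping along a single ray can be sketched as below. This is a schematic under simple assumptions (normal incidence, intensity reflection coefficients, no attenuation term), not the authors' implementation; fusing the per-ray echoes with their phase delays is the separate step described above.

```python
import numpy as np

def march_ray(impedance, e0=1.0):
    """Accumulate reflected energy at discrete points along one acoustic ray.

    impedance : 1D array of acoustic impedances sampled along the ray
    Returns the per-sample reflected intensity, later fused across rays
    and transducer elements with their phase delays.
    """
    reflected = np.zeros(len(impedance) - 1)
    e = e0                                    # transmitted energy so far
    for i in range(len(impedance) - 1):
        z1, z2 = impedance[i], impedance[i + 1]
        r = ((z2 - z1) / (z2 + z1)) ** 2      # intensity reflection coefficient
        reflected[i] = e * r
        e *= (1.0 - r)                        # remainder keeps propagating
    return reflected
```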
Removal of BCG artefact from concurrent fMRI-EEG recordings based on EMD and PCA.
Javed, Ehtasham; Faye, Ibrahima; Malik, Aamir Saeed; Abdullah, Jafri Malin
2017-11-01
Simultaneous electroencephalography (EEG) and functional magnetic resonance image (fMRI) acquisitions provide better insight into brain dynamics. Some artefacts due to simultaneous acquisition pose a threat to the quality of the data. One such problematic artefact is the ballistocardiogram (BCG) artefact. We developed a hybrid algorithm that combines features of empirical mode decomposition (EMD) with principal component analysis (PCA) to reduce the BCG artefact. The algorithm does not require extra electrocardiogram (ECG) or electrooculogram (EOG) recordings to extract the BCG artefact. The method was tested with both simulated and real EEG data of 11 participants. From the simulated data, the similarity index between the extracted BCG and the simulated BCG showed the effectiveness of the proposed method in BCG removal. On the other hand, real data were recorded with two conditions, i.e. resting state (eyes closed dataset) and task influenced (event-related potentials (ERPs) dataset). Using qualitative (visual inspection) and quantitative (similarity index, improved normalized power spectrum (INPS) ratio, power spectrum, sample entropy (SE)) evaluation parameters, the assessment results showed that the proposed method can efficiently reduce the BCG artefact while preserving the neuronal signals. Compared with conventional methods, namely, average artefact subtraction (AAS), optimal basis set (OBS) and combined independent component analysis and principal component analysis (ICA-PCA), the statistical analyses of the results showed that the proposed method has better performance, and the differences were significant for all quantitative parameters except for the power and sample entropy. The proposed method does not require any reference signal, prior information or assumption to extract the BCG artefact. It will be very useful in circumstances where the reference signal is not available. Copyright © 2017 Elsevier B.V. All rights reserved.
Finite-element lattice Boltzmann simulations of contact line dynamics
NASA Astrophysics Data System (ADS)
Matin, Rastin; Krzysztof Misztal, Marek; Hernández-García, Anier; Mathiesen, Joachim
2018-01-01
The lattice Boltzmann method has become one of the standard techniques for simulating a wide range of fluid flows. However, the intrinsic coupling of momentum and space discretization restricts the traditional lattice Boltzmann method to regular lattices. Alternative off-lattice Boltzmann schemes exist for both single- and multiphase flows that decouple the velocity discretization from the underlying spatial grid. The current study extends the applicability of these off-lattice methods by introducing a finite element formulation that enables simulating contact line dynamics for partially wetting fluids. This work exemplifies the implementation of the scheme and furthermore presents benchmark experiments that show the scheme reduces spurious currents at the liquid-vapor interface by at least two orders of magnitude compared to a nodal implementation and allows for predicting the equilibrium states accurately in the range of moderate contact angles.
Shi, Xianbo; Reininger, Ruben; Sanchez del Rio, Manuel; ...
2014-05-15
A new method for beamline simulation combining ray-tracing and wavefront propagation is described. The 'Hybrid Method' computes diffraction effects when the beam is clipped by an aperture or mirror length and can also simulate the effect of figure errors in the optical elements when diffraction is present. The effect of different spatial frequencies of figure errors on the image is compared with SHADOW results, pointing to the limitations of the latter. The code has been benchmarked against the multi-electron version of SRW in one dimension to show its validity in the case of fully, partially and non-coherent beams. The results demonstrate that the code is considerably faster than the multi-electron version of SRW and is therefore a useful tool for beamline design and optimization.
NASA Astrophysics Data System (ADS)
Drumond Vieira, Rodrigo; da Rocha Bernardo, José Roberto; Evagorou, Maria; Florentino de Melo, Viviane
2015-05-01
In this article, we focus on the contributions that a simulated jury-based activity might have for pre-service teachers, especially for their active participation and learning in teacher education. We observed a teacher educator using a series of simulated juries as teaching resources to help pre-service teachers develop their pedagogical knowledge and their argumentation abilities in a physics teacher methods course. For the purposes of this article, we have selected one simulated jury-based activity, comprising two opposed groups of pre-service teachers that presented aspects that hinder the teachers' development of professional knowledge (against group) and aspects that allow this development (favor group). After the groups' presentations, a group of judges was formed to evaluate the discussion. We applied a multi-level method for discourse analysis and the results showed that (1) the simulated jury afforded the pre-service teachers to position themselves as active knowledge producers; (2) the teacher acted as 'animator' of the pre-service teachers' actions, showing responsiveness to the emergence of circumstantial teaching and learning opportunities and (3) the simulated jury culminated in the judges' identification of the pattern 'concrete/obstacles-ideological/possibilities' in the groups' responses, which was elaborated by the teacher for the whole class. Implications from this study include using simulated juries for teaching and learning and for the development of the pre-service teachers' argumentative abilities. The potential of simulated juries to improve teaching and learning needs to be further explored in order to inform the uses and reflections of this resource in science education.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES), using implicitly filtered LES of periodic channel flow at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust in that it can be applied to all convergence types and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts a reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
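Schematically, verification frameworks of this kind decompose the LES result on a given grid and filter as follows (generic notation assumed here for illustration, not the paper's exact equations):

```latex
S = S_C + \delta_{SN} + \delta_{SM},
\qquad
\delta_{SN} \approx c_N\, h^{p_N},
\qquad
\delta_{SM} \approx c_M\, \Delta^{p_M},
```

where S_C is the numerical benchmark, δ_SN and δ_SM the numerical and modeling errors, and h and Δ the grid spacing and filter width. Writing this expansion on several systematically refined grids yields the coupled systems that the five- and seven-equation methods solve for the unknowns (S_C, the error coefficients, and, in the larger systems, the orders p_N and p_M).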
Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge
Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.
2016-01-01
Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of the CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, those used for BAR, or both are not fully converged, and that the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM Generalized Force Field can be an indicator of the accuracy of the binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root mean square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
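For reference, the two estimators being compared take the following standard forms (β = 1/k_BT; the equal-sample-size version of the BAR self-consistency relation is shown for brevity):

```latex
\Delta F_{\mathrm{TI}} = \int_0^1
  \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda,
\qquad
\left\langle f\!\big(\beta (W_{F} - \Delta F)\big) \right\rangle_{F}
 = \left\langle f\!\big(\beta (W_{R} + \Delta F)\big) \right\rangle_{R},
\quad
f(x) = \frac{1}{1 + e^{x}},
```

where W_F and W_R are forward and reverse work values between neighboring λ states, and the BAR estimate of ΔF is obtained by solving the implicit equation.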
Numerical Simulation of Tethered Underwater Kites for Power Generation
NASA Astrophysics Data System (ADS)
Ghasemi, Amirmahdi; Olinger, David; Tryggvason, Gretar
2015-11-01
An emerging renewable energy technology, tethered undersea kites (TUSK), which is used to extract hydrokinetic energy from ocean and tidal currents, is studied. TUSK systems consist of a rigid-winged "kite," or glider, moving in an ocean current, connected by tethers to a floating buoy on the ocean surface. The TUSK kite is a current-speed enhancement device, since the kite can move in high-speed, cross-current motion at 4-6 times the current velocity, thus producing more power than conventional marine turbines. A computational simulation is developed to simulate the dynamic motion of an underwater kite and extendable tether. A two-step projection method within a finite volume formulation, along with an OpenMP acceleration method, is employed to solve the Navier-Stokes equations. An immersed boundary method is incorporated to model the fluid-structure interaction of the rigid kite (with a NACA 0012 airfoil shape in 2D and a NACA 0021 airfoil shape in 3D simulations) and the fluid flow. PID control methods are used to adjust the kite angle of attack during the power (tether reel-out) and retraction (reel-in) phases. Two baseline simulations (for kite motions in two and three dimensions) are studied, and system power output, flow field vorticity, tether tension, and hydrodynamic coefficients (lift and drag) for the kite are determined. The simulated power output shows good agreement with established theoretical results for a kite moving in two dimensions.
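The PID adjustment of the kite's angle of attack can be illustrated with a minimal loop. The gains and setpoints below are hypothetical, chosen only to show the reel-out/reel-in switching described above, not values from the study.

```python
class PID:
    """Minimal PID loop for the kite angle of attack (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# A high-lift angle of attack is commanded during reel-out (power phase),
# a low-drag one during reel-in; setpoints in degrees are hypothetical.
controller = PID(kp=2.0, ki=0.1, kd=0.05, dt=1e-3)
alpha_correction = controller.update(setpoint=12.0, measured=9.5)
```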
He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong
2016-02-01
Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values greatly improve simulation ability. Sensitivity analysis, an important method for screening out sensitive parameters, can comprehensively analyze how model parameters affect simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, by comparing field measurement data with the simulation results, we tested the BIOME-BGC model's capability to simulate the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the parameters with a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, calculating the global, first-order, and second-order sensitivity indices. The results showed that the BIOME-BGC model simulated the NPP of the L. olgensis forest in the sample plot well. The Morris sensitivity method provided a reliable parameter sensitivity analysis result at a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.
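A Morris screening of the kind described can be set up in a few lines with the SALib Python library, assuming it is available; the sketch below stubs BIOME-BGC with a toy response function, and the parameter bounds are illustrative, not the study's values.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze import morris as morris_analyze

# Two of the parameters the study found influential (bounds illustrative):
problem = {
    "num_vars": 2,
    "names": ["new_stem_to_new_leaf_C", "leaf_CN_ratio"],
    "bounds": [[0.5, 2.5], [20.0, 60.0]],
}

def run_biome_bgc(params):
    """Placeholder for a real BIOME-BGC run returning simulated NPP."""
    stem_leaf, leaf_cn = params
    return 800.0 - 120.0 * stem_leaf - 5.0 * (leaf_cn - 40.0)  # toy response

X = morris_sample(problem, N=50, num_levels=4)        # Morris trajectories
Y = np.array([run_biome_bgc(x) for x in X])
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(Si["mu_star"], Si["sigma"])  # mean |elementary effect| and its spread
```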
Historical droughts in Mediterranean regions during the last 500 years: a data/model approach
NASA Astrophysics Data System (ADS)
Brewer, S.; Alleaume, S.; Guiot, J.; Nicault, A.
2007-06-01
We present here a new method for comparing the output of General Circulation Models (GCMs) with proxy-based reconstructions, using time series of reconstructed and simulated climate parameters. The method uses k-means clustering to allow comparison between different periods that have similar spatial patterns, and a fuzzy logic-based distance measure in order to take reconstruction errors into account. The method has been used to test two coupled ocean-atmosphere GCMs over the Mediterranean region for the last 500 years, using an index of drought stress, the Palmer Drought Severity Index. The results showed that, whilst no model exactly simulated the reconstructed changes, all simulations were an improvement over using the mean climate, and a good match was found after 1650 with a model run that took into account changes in volcanic forcing, solar irradiance, and greenhouse gases. A more detailed investigation of the output of this model showed the existence of a set of atmospheric circulation patterns linked to the patterns of drought stress: 1) a blocking pattern over northern Europe linked to dry conditions in the south prior to the Little Ice Age (LIA) and during the 20th century; 2) a NAO-positive like pattern with increased westerlies during the LIA; 3) a NAO-negative like period shown in the model prior to the LIA, but that occurs most frequently in the data during the LIA. The results of the comparison show the improvement in simulated climate as various forcings are included and help to understand the atmospheric changes that are linked to the observed reconstructed climate changes.
Numerical simulation of gas distribution in goaf under Y ventilation mode
NASA Astrophysics Data System (ADS)
Li, Shengzhou; Liu, Jun
2018-04-01
Taking the Y-type ventilation of the working face as the research object, the diffusion equation is introduced to simulate the diffusion characteristics of gas, and the Navier-Stokes and Brinkman equations are used to simulate the gas flow in the working face and the goaf; on this basis, a physical model of gas flow in the coal mining face is established. Using the numerical simulation software COMSOL Multiphysics, the gas distribution in the goaf under the Y ventilation mode is simulated, and the gas distribution at the working face, the upper corner, and the goaf is analyzed. The results show that the Y-type ventilation system can effectively mitigate gas accumulation and overrun at the upper corner.
Multi-disciplinary coupling for integrated design of propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Singhal, S. N.
1993-01-01
Effective computational simulation procedures are described for modeling the inherent multi-disciplinary interactions for determining the true response of propulsion systems. Results are presented for propulsion system responses including multi-discipline coupling effects via (1) coupled multi-discipline tailoring, (2) an integrated system of multidisciplinary simulators, (3) coupled material-behavior/fabrication-process tailoring, (4) sensitivities using a probabilistic simulator, and (5) coupled materials/structures/fracture/probabilistic behavior simulator. The results show that the best designs can be determined if the analysis/tailoring methods account for the multi-disciplinary coupling effects. The coupling across disciplines can be used to develop an integrated interactive multi-discipline numerical propulsion system simulator.
Adaptive hybrid simulations for multiscale stochastic reaction networks.
Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa
2015-01-21
The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
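Two ingredients of such hybrid schemes, the copy-number-based partition and the exact SSA step for the discrete subnetwork, can be sketched as follows. This is an assumption-level illustration: the actual method also advances the continuous subnetwork between discrete firings, applies the quasi-stationary reduction to fast subnetworks, and re-partitions adaptively as the dynamics evolve.

```python
import numpy as np

def partition(copy_numbers, threshold=100):
    """Treat species with copy numbers above the threshold continuously."""
    return {s: ("continuous" if n > threshold else "discrete")
            for s, n in copy_numbers.items()}

def ssa_step(x, stoich, propensities, rng):
    """One exact Gillespie SSA step for the discrete subnetwork.

    x            : current state vector (copy numbers)
    stoich       : (n_reactions, n_species) stoichiometry matrix
    propensities : list of callables, propensity of each reaction at x
    """
    a = np.array([p(x) for p in propensities])
    a0 = a.sum()
    if a0 == 0:
        return x, np.inf                   # no discrete reaction can fire
    tau = rng.exponential(1.0 / a0)        # waiting time to the next firing
    j = rng.choice(len(a), p=a / a0)       # which reaction fires
    return x + stoich[j], tau
```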
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2016-10-01
Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
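For context, the tracer dilution method infers the methane emission rate from the known tracer release rate through the standard relation (with background-corrected concentrations integrated across the downwind plume transect):

```latex
Q_{\mathrm{CH_4}} = Q_{\mathrm{tracer}}
  \cdot \frac{\int C_{\mathrm{CH_4}}\, dx}{\int C_{\mathrm{tracer}}\, dx}
  \cdot \frac{M_{\mathrm{CH_4}}}{M_{\mathrm{tracer}}},
```

where the molar-mass ratio converts the mixing-ratio ratio to a mass flow. The percent errors discussed above arise when the plume-integrated concentration ratio measured along the transect differs from the true ratio at the source, e.g. because the tracer is not co-located with the methane hot spot or the plumes are not yet well mixed.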
Face-based smoothed finite element method for real-time simulation of soft tissue
NASA Astrophysics Data System (ADS)
Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane
2017-03-01
In soft tissue surgery, a tumor and other anatomical structures are usually located using preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, a biomechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) for modeling soft tissue deformation. This numerical technique was introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases the method allows for reducing the number of degrees of freedom while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to simulations of brain shift and of kidney deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that it has similar accuracy to the standard FEM in the brain-shift and kidney-deformation simulations.
Functional connectivity analysis in EEG source space: The choice of method
Knyazeva, Maria G.
2017-01-01
Functional connectivity (FC) is among the most informative features derived from EEG. However, the most straightforward sensor-space analysis of FC is unreliable owing to volume conduction effects. An alternative, source-space analysis of FC, is optimal for high- and mid-density EEG (hdEEG, mdEEG); however, it is questionable for the widely used low-density EEG (ldEEG) because of inadequate surface sampling. Here, using simulations, we investigate the performance of two source FC methods, inverse-based source FC (ISFC) and cortical partial coherence (CPC). To examine the effects of the localization errors of the inverse method on the FC estimation, we simulated an oscillatory source with varying locations and SNRs. To compare the FC estimations of the two methods, we simulated two synchronized sources with varying between-source distance and SNR. The simulations were implemented for hdEEG, mdEEG, and ldEEG. We showed that the performance of both methods deteriorates for deep sources owing to their inaccurate localization and smoothing. The accuracy of both methods improves with increasing between-source distance. The best ISFC performance was achieved using hd/mdEEG, while the best CPC performance was observed with ldEEG. In conclusion, with hdEEG, ISFC outperforms CPC and should therefore be the preferred method; in studies based on ldEEG, CPC is the method of choice. PMID:28727750
Extraterrestrial sound for planetaria: A pedagogical study.
Leighton, T G; Banda, N; Berges, B; Joseph, P F; White, P R
2016-08-01
The purpose of this project was to supply an acoustical simulation device to a local planetarium for use in live shows aimed at engaging and inspiring children in science and engineering. The device plays audio simulations of estimates of the sounds produced by natural phenomena to accompany audio-visual presentations and live shows about Venus, Mars, and Titan. Amongst the simulated sounds are thunder, wind, and cryo-volcanoes. The device can also modify the speech of the presenter (or an audience member) in accordance with the underlying physics to reproduce those vocalizations as if they had been produced on the world under discussion. Given that no time series recordings exist of sounds from other worlds, these sounds had to be simulated. The goal was to ensure that the audio simulations were delivered in time for the planetarium's launch show to enable the requested outreach to children. The exercise has also allowed an explanation of the science and engineering behind the creation of the sounds. This has been achieved for young children, and also for older students and undergraduates, who could then debate the limitations of the method.
NASA Astrophysics Data System (ADS)
Gao, Xiatian; Wang, Xiaogang; Jiang, Binhao
2017-10-01
UPSF (Universal Plasma Simulation Framework) is a new plasma simulation code designed for maximum flexibility using cutting-edge techniques supported by the C++17 standard. Through metaprogramming, UPSF provides arbitrary-dimensional data structures and methods to support various kinds of plasma simulation models, such as Vlasov, particle-in-cell (PIC), fluid, and Fokker-Planck models, and their variants and hybrid methods. With C++ metaprogramming, a single code can be applied to systems of arbitrary dimension with no loss of performance. UPSF can also automatically parallelize the distributed data structure and accelerate matrix and tensor operations with BLAS. A three-dimensional particle-in-cell code was developed based on UPSF. Two test cases, Landau damping and the Weibel instability for the electrostatic and electromagnetic situations respectively, are presented to show the validity and performance of the UPSF code.
Modeling and Simulation for Mission Operations Work System Design
NASA Technical Reports Server (NTRS)
Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.
2003-01-01
Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual application of the method: the Brahms Mars Exploration Rover project. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.; Harrison, D. E. Jr.
A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed over the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
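A minimal sketch of the idea, variable time steps with a single force evaluation per step, is given below. The Lennard-Jones potential and the displacement-cap criterion for choosing dt are our assumptions, not the paper's exact scheme:

```python
import numpy as np

def lj_forces(pos):
    """Pairwise Lennard-Jones forces (toy stand-in for a cascade potential)."""
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d ** 2).sum(-1) + np.eye(len(pos))     # dummy 1 on the diagonal
    inv6 = r2 ** -3
    f = (24 * inv6 * (2 * inv6 - 1) / r2)[..., None] * d
    f[np.eye(len(pos), dtype=bool)] = 0.0        # no self-interaction
    return f.sum(axis=1)

def run(pos, vel, mass=1.0, d_max=0.05, steps=1000):
    """Velocity Verlet with an adaptive dt that caps per-step displacement."""
    f = lj_forces(pos)
    for _ in range(steps):
        v_max = np.abs(vel).max() + 1e-12
        a_max = np.abs(f).max() / mass + 1e-12
        dt = min(d_max / v_max, np.sqrt(2 * d_max / a_max))
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)                       # the one force call per step
        vel += 0.5 * dt * f / mass
    return pos, vel

# a 4x4x4 crystallite with one energetic primary knock-on atom
pos = np.array(np.meshgrid(*[np.arange(4.0)] * 3)).reshape(3, -1).T * 1.12
vel = np.zeros_like(pos)
vel[0] = [8.0, 3.0, 0.0]
pos, vel = run(pos, vel)
```

During close collisions v_max and a_max spike, so dt shrinks automatically; in quiet phases it grows back, which is where the speedup over a fixed-step integrator comes from.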
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
The finite element (FE) method is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure. In this way, the task execution time (TET) decreases such that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even under a large time step and large time delay.
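For reference, the CDM recursion the comparison starts from can be written in a few lines. The sketch below is generic structural dynamics, not the paper's substructure code; it advances M x'' + C x' + K x = f(t):

```python
import numpy as np

def central_difference(M, C, K, f, x0, v0, dt, n_steps):
    """Explicit CDM stepping; stable only for dt < 2 / omega_max."""
    A = M / dt**2 + C / (2 * dt)      # constant: factor (or invert) once
    A_inv = np.linalg.inv(A)          # diagonal M, C make this trivial
    a0 = np.linalg.solve(M, f(0.0) - C @ v0 - K @ x0)
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0      # fictitious step x_{-1}
    x = x0.copy()
    out = [x0.copy()]
    for i in range(n_steps):
        rhs = (f(i * dt) - (K - 2 * M / dt**2) @ x
               - (M / dt**2 - C / (2 * dt)) @ x_prev)
        x_prev, x = x, A_inv @ rhs
        out.append(x.copy())
    return np.array(out)

M = np.diag([1.0, 1.0]); C = np.diag([0.1, 0.1])
K = np.array([[40.0, -20.0], [-20.0, 20.0]])
hist = central_difference(M, C, K, lambda t: np.array([0.0, np.sin(5 * t)]),
                          np.zeros(2), np.zeros(2), dt=0.01, n_steps=2000)
```

When M and C are diagonal, A is diagonal and each step costs only matrix-vector products, which is consistent with CDM winning in the diagonal-damping case reported above.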
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
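The fusion step can be illustrated with a much simpler spatial interpolator. The sketch below corrects a gridded field by interpolating the model-minus-observation bias with inverse-distance weighting; this is a stand-in for the bias kriging step, and the nearest-cell sampling and all data are our assumptions:

```python
import numpy as np

def idw_bias_correction(grid_xy, grid_vals, obs_xy, obs_vals, power=2.0):
    """Subtract an interpolated model-minus-observation bias from the grid."""
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    nearest = d.argmin(axis=0)                   # grid cell closest to each site
    bias = grid_vals[nearest] - obs_vals         # positive = model too high
    w = 1.0 / np.maximum(d, 1e-6) ** power       # grid-to-site IDW weights
    return grid_vals - (w * bias).sum(axis=1) / w.sum(axis=1)

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
model = 35.0 + 10.0 * grid_xy[:, 0]              # gridded PM2.5, ug/m3
obs_xy = np.random.default_rng(0).random((15, 2))
obs = 30.0 + 10.0 * obs_xy[:, 0]                 # model is biased high by 5
fused = idw_bias_correction(grid_xy, model, obs_xy, obs)
```

A kriging-based version would replace the IDW weights with weights derived from a fitted variogram of the bias field, which is what lets the published method account for the spatial structure of the error.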
Simulations of Scatterometry Down to 22 nm Structure Sizes and Beyond with Special Emphasis on LER
NASA Astrophysics Data System (ADS)
Osten, W.; Ferreras Paz, V.; Frenner, K.; Schuster, T.; Bloess, H.
2009-09-01
In recent years, scatterometry has become one of the most commonly used methods for CD metrology. With decreasing structure sizes for future technology nodes, the search for optimized scatterometry measurement configurations becomes more important in order to exploit the maximum sensitivity. As widespread industrial scatterometry tools mainly still use pre-set measurement configurations, there are still free parameters available to improve sensitivity. Our current work uses a simulation-based approach to predict and optimize the sensitivity for future technology nodes. Since line edge roughness is becoming important for such small structures, these imperfections of the periodic continuation cannot be neglected. Using Fourier methods such as the rigorous coupled wave approach (RCWA) for the diffraction calculus, nonperiodic features are hard to reach. We show that in this field certain types of field-stitching methods show good numerical behaviour and lead to useful results.
NASA Astrophysics Data System (ADS)
Zolotorevskii, V. S.; Pozdnyakov, A. V.; Churyumov, A. Yu.
2012-11-01
A calculation-experimental study is carried out to improve the concept of searching for new alloying systems in order to develop new casting alloys using mathematical simulation methods in combination with thermodynamic calculations. The results show the high effectiveness of the applied methods. The real possibility of selecting promising compositions with the required set of casting and mechanical properties, using mainly calculation methods and a minimum number of experiments, is exemplified by alloys with thermally hardened Al-Cu and Al-Cu-Mg matrices, as well as poorly soluble additives that form eutectic components.
Machine learning for autonomous crystal structure identification.
Reinhart, Wesley F; Long, Andrew W; Howard, Michael P; Ferguson, Andrew L; Panagiotopoulos, Athanassios Z
2017-07-21
We present a machine learning technique to discover and distinguish relevant ordered structures from molecular simulation snapshots or particle tracking data. Unlike other popular methods for structural identification, our technique requires no a priori description of the target structures. Instead, we use nonlinear manifold learning to infer structural relationships between particles according to the topology of their local environment. This graph-based approach yields unbiased structural information which allows us to quantify the crystalline character of particles near defects, grain boundaries, and interfaces. We demonstrate the method by classifying particles in a simulation of colloidal crystallization, and show that our method identifies structural features that are missed by standard techniques.
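A toy version of the pipeline, local-environment features followed by a nonlinear embedding and clustering, can be sketched as follows. We use sorted neighbor distances and t-SNE in place of the paper's graph-based topological descriptors and its specific manifold learning method, so this is an illustration of the workflow rather than the published technique:

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def classify_particles(positions, n_neighbors=12, n_clusters=2):
    """Embed per-particle local environments, then cluster the embedding."""
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    feats = np.sort(d, axis=1)[:, :n_neighbors]   # local-environment signature
    feats /= feats[:, :1]                         # scale invariance
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    return emb, labels

rng = np.random.default_rng(0)
lattice = np.stack(np.meshgrid(*[np.arange(4.0)] * 3), -1).reshape(-1, 3)
lattice += 0.03 * rng.standard_normal(lattice.shape)   # thermal jitter
fluid = rng.random((64, 3)) * 4.0                      # disordered particles
emb, labels = classify_particles(np.vstack([lattice, fluid]))
```

Particles in the crystallite and in the disordered gas separate into distinct clusters without any template structure being supplied, which is the "no a priori description" property the method trades on.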
Kovalchuk, Sergey V; Funkner, Anastasia A; Metsker, Oleg G; Yakovlev, Aleksey N
2018-06-01
An approach to building a hybrid simulation of patient flow is introduced with a combination of data-driven methods for automation of model identification. The approach is described with a conceptual framework and basic methods for the combination of different techniques. The implementation of the proposed approach for simulation of the acute coronary syndrome (ACS) was developed and used in an experimental study. A combination of data, text, and process mining techniques and machine learning approaches for the analysis of electronic health records (EHRs), with discrete-event simulation (DES) and queueing theory for the simulation of patient flow, was proposed. The performed analysis of EHRs for ACS patients enabled identification of several classes of clinical pathways (CPs), which were used to implement a more realistic simulation of the patient flow. The developed solution was implemented using Python libraries (SimPy, SciPy, and others). The proposed approach enables a more realistic and detailed simulation of the patient flow within a group of related departments. An experimental study shows an improved simulation of patient length of stay for the ACS patient flow obtained from EHRs in the Almazov National Medical Research Centre in Saint Petersburg, Russia. The proposed approach, methods, and solutions provide a conceptual, methodological, and programming framework for the implementation of a simulation of complex and diverse scenarios within a flow of patients for different purposes: decision making, training, management optimization, and others. Copyright © 2018 Elsevier Inc. All rights reserved.
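Since the solution was built on SimPy, the DES skeleton is easy to sketch. The fragment below is a minimal sketch; the pathway classes, arrival rate, lengths of stay and bed count are invented, not values mined from the Almazov EHRs:

```python
import random
import simpy

LOS_BY_CP = {0: 48.0, 1: 96.0, 2: 168.0}   # mean length of stay (h) per CP class

def patient(env, name, cp, beds):
    """One ACS patient following a sampled clinical pathway class."""
    with beds.request() as bed:
        yield bed
        los = random.expovariate(1.0 / LOS_BY_CP[cp])
        yield env.timeout(los)
        print(f"{env.now:7.1f} h  {name} (CP {cp}) discharged after {los:.1f} h")

def arrivals(env, beds):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))   # ~1 arrival per 3 h
        cp = random.choices([0, 1, 2], weights=[0.5, 0.3, 0.2])[0]
        env.process(patient(env, f"patient-{i}", cp, beds))
        i += 1

random.seed(0)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=20)
env.process(arrivals(env, beds))
env.run(until=24 * 7)                                    # one simulated week
```

In the full approach, the CP classes, their probabilities and the length-of-stay distributions would come from the EHR mining stage rather than being hard-coded.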
Morikami, Kenji; Itezono, Yoshiko; Nishimoto, Masahiro; Ohta, Masateru
2014-01-01
Compounds with a medium-sized flexible ring often show atropisomerism that is caused by the high-energy barriers between long-lived conformers that can be isolated and often have different biological properties to each other. In this study, the frequency of the transition between the two stable conformers, aS and aR, of thienotriazolodiazepine compounds with flexible 7-membered rings was estimated computationally by Monte Carlo (MC) simulations and validated experimentally by NMR experiments. To estimate the energy barriers for transitions as precisely as possible, the potential energy (PE) surfaces used in the MC simulations were calculated by molecular orbital (MO) methods. To accomplish the MC simulations with the MO-based PE surfaces in a practical central processing unit (CPU) time, the MO-based PE of each conformer was pre-calculated and stored before the MC simulations, and then only referred to during the MC simulations. The activation energies for transitions calculated by the MC simulations agreed well with the experimental ΔG determined by the NMR experiments. The analysis of the transition trajectories of the MC simulations revealed that the transition occurred not only through the transition states, but also through many different transition paths. Our computational methods gave us quantitative estimates of atropisomerism of the thienotriazolodiazepine compounds in a practical period of time, and the method could be applicable for other slow-dynamics phenomena that cannot be investigated by other atomistic simulations.
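The table-lookup trick that makes the MO-based surface affordable can be sketched in a few lines: pre-compute the energy on a dihedral grid once, then let the Metropolis loop only interpolate it. This is a one-dimensional toy; the surface shape, step size and temperature are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 0.593                      # kcal/mol at ~298 K

# PE pre-computed once on a dihedral grid and stored, standing in for the
# tabulated MO-based surface (the functional form here is invented)
grid = np.linspace(-180.0, 180.0, 181)
pe_table = 3.0 * np.cos(np.radians(2 * grid)) + np.cos(np.radians(grid))

def energy(phi_deg):
    """During MC, only interpolate the stored surface; no MO call."""
    return np.interp(phi_deg, grid, pe_table, period=360.0)

phi = 90.0                      # start in one well (wells sit near +/-90 deg)
e = energy(phi)
crossings, prev_side = 0, np.sign(phi)
for _ in range(100_000):
    phi_new = (phi + rng.normal(0.0, 15.0) + 180.0) % 360.0 - 180.0
    e_new = energy(phi_new)
    if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
        phi, e = phi_new, e_new
    side = np.sign(phi)
    if side != 0 and side != prev_side:   # crude crossing counter (0/180 deg)
        crossings += 1
        prev_side = side

print("barrier crossings:", crossings)
```

Counting well-to-well crossings over a long trajectory is the crude analogue of the transition-frequency estimate that the study validates against the NMR-derived ΔG.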
GPU-based Efficient Realistic Techniques for Bleeding and Smoke Generation in Surgical Simulators
Halic, Tansel; Sankaranarayanan, Ganesh; De, Suvranu
2010-01-01
Background In actual surgery, smoke and bleeding due to cautery processes provide important visual cues to the surgeon, which have been proposed as factors in surgical skill assessment. While several virtual reality (VR)-based surgical simulators have incorporated effects of bleeding and smoke generation, they are not realistic due to the requirement of real-time performance. To be interactive, visual updates must be performed at least at 30 Hz and haptic (touch) information must be refreshed at 1 kHz. Simulation of smoke and bleeding is, therefore, either ignored or performed using highly simplified techniques, since other computationally intensive processes compete for the available CPU resources. Methods In this work, we develop a novel low-cost method to generate realistic bleeding and smoke in VR-based surgical simulators which outsources the computations to the graphics processing unit (GPU), thus freeing up the CPU for other time-critical tasks. This method is independent of the complexity of the organ models in the virtual environment. User studies were performed using 20 subjects to determine the visual quality of the simulations compared to real surgical videos. Results The smoke and bleeding simulations were implemented as part of a Laparoscopic Adjustable Gastric Banding (LAGB) simulator. For the bleeding simulation, the original implementation using the shader did not incur noticeable overhead. However, for smoke generation, an I/O (Input/Output) bottleneck was observed and two different methods were developed to overcome this limitation. Based on our benchmark results, a buffered approach performed better than a pipelined approach and could support up to 15 video streams in real time. Human subject studies showed that the visual realism of the simulations was as good as in real surgery (median rating of 4 on a 5-point Likert scale). Conclusions Based on the performance results and the subject study, both the bleeding and smoke simulations were concluded to be efficient, highly realistic and well suited to VR-based surgical simulators. PMID:20878651
High-Order/Low-Order methods for ocean modeling
Newman, Christopher; Womeldorff, Geoff; Chacón, Luis; ...
2015-06-01
In this study, we examine a High Order/Low Order (HOLO) approach for a z-level ocean model and show that the traditional semi-implicit and split-explicit methods, as well as a recent preconditioning strategy, can easily be cast in the framework of HOLO methods. The HOLO formulation admits an implicit-explicit method that is algorithmically scalable and second-order accurate, allowing timesteps much larger than the barotropic time scale. We show how HOLO approaches, in particular the implicit-explicit method, can provide a solid route for ocean simulation to heterogeneous computing and exascale environments.
An, Yongkai; Lu, Wenxi; Cheng, Weiguo
2015-01-01
This paper introduces a surrogate model to identify an optimal exploitation scheme, with the western Jilin province selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county respectively so as to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate model of the numerical simulation model of groundwater flow was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, indicating high approximation accuracy. The contrast between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
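The LHS-plus-kriging surrogate workflow looks roughly as follows. The sketch uses a Gaussian process regressor as the kriging-type surrogate and a made-up two-variable "simulator", so every number and function here is illustrative:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_drawdown(x):
    """Stand-in for the groundwater flow simulator (normally hours per call)."""
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Latin Hypercube sample of the feasible region of the input variables
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=40), l_bounds=[0, 0], u_bounds=[1, 1])
y = expensive_drawdown(X)

# 2. kriging-type surrogate (a GP regressor; the paper uses regression kriging)
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X, y)

# 3. the optimizer queries the cheap surrogate instead of the simulator
X_new = qmc.scale(sampler.random(n=5), l_bounds=[0, 0], u_bounds=[1, 1])
pred, std = gp.predict(X_new, return_std=True)
```

The optimizer then evaluates gp.predict instead of the groundwater model, which is where the 25-days-versus-5.5-hours difference comes from.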
Xiao, Li; Luo, Ray
2017-12-07
We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium, with detailed surface features resembling those found on the solvent excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.
NASA Astrophysics Data System (ADS)
Desgranges, Caroline; Delhommelle, Jerome
2016-11-01
Using the entropy S as a reaction coordinate, we determine the free energy barrier associated with the formation of a liquid droplet from a supersaturated vapor for atomic and molecular fluids. For this purpose, we develop the μVT-S simulation method, which combines the advantages of the grand-canonical ensemble, which allows for a direct evaluation of the entropy, and of the umbrella sampling method, which is well suited to the study of an activated process like nucleation. Applying this approach to an atomic system such as Ar allows us to test the method. The results show that the μVT-S method gives the correct dependence on supersaturation of the height of the free energy barrier and of the size of the critical droplet, when compared to predictions from classical nucleation theory and to previous simulation results. In addition, it provides insight into the relation between the entropy and droplet formation throughout this process. An additional advantage of the μVT-S approach is its direct transferability to molecular systems, since it uses the entropy of the system as the reaction coordinate. Applications of the μVT-S simulation method to N2 and CO2 are presented and discussed in this work, showing the versatility of the μVT-S approach.
New method of processing heat treatment experiments with numerical simulation support
NASA Astrophysics Data System (ADS)
Kik, T.; Moravec, J.; Novakova, I.
2017-08-01
In this work, the benefits of combining modern software for numerical simulation of welding processes with laboratory research are described. A new method of processing heat treatment experiments is presented that yields relevant input data for numerical simulations of the heat treatment of large parts. Using experiments on small test samples, it is now possible to simulate cooling conditions comparable with the cooling of bigger parts. Results from this method of testing make the boundary conditions of the real cooling process more accurate, and can also be used to improve software databases and optimize computational models. The aim is to refine the computation of temperature fields for large hardening parts, based on a new method for determining the temperature dependence of the heat transfer coefficient into the hardening medium for a particular material, a defined maximal thickness of the processed part, and given cooling conditions. In the paper we also present a comparison of the standard and modified (according to the newly suggested methodology) heat transfer coefficient data and their influence on the simulation results. It shows how even small changes influence mainly the distributions of temperature, metallurgical phases, hardness and stresses. The experiment also yields not only input data and data enabling optimization of the computational model, but at the same time verification data. The greatest advantage of the described method is its independence of the type of cooling medium used.
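The heart of the procedure, extracting a heat transfer coefficient from a measured cooling curve of a small sample, can be sketched with a lumped-capacitance fit. Our simplifications: a constant h and a small-Biot-number sample, whereas the paper determines a temperature-dependent coefficient; all data and material properties below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# cooling curve of the small test sample (synthetic data for the sketch)
t = np.linspace(0, 600, 61)                                  # s
T_meas = 30 + (850 - 30) * np.exp(-t / 180) + rng.normal(0, 2, t.size)

A, m, c = 0.01, 0.5, 490.0      # m^2, kg, J/(kg K): assumed sample properties
T_env, T0 = 30.0, 850.0

def lumped(t, h):
    """Lumped-capacitance cooling, valid for small-Biot-number samples."""
    return T_env + (T0 - T_env) * np.exp(-h * A / (m * c) * t)

(h_fit,), _ = curve_fit(lumped, t, T_meas, p0=[100.0])
print(f"fitted heat transfer coefficient: {h_fit:.0f} W/(m^2 K)")
```

A temperature dependence h(T) can be approximated by repeating the fit over sliding temperature windows of the same cooling curve.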
Achieving Rigorous Accelerated Conformational Sampling in Explicit Solvent.
Doshi, Urmi; Hamelberg, Donald
2014-04-03
Molecular dynamics simulations can provide valuable atomistic insights into biomolecular function. However, the accuracy of molecular simulations on general-purpose computers depends on the time scale of the events of interest. Advanced simulation methods, such as accelerated molecular dynamics, have shown tremendous promise in sampling the conformational dynamics of biomolecules, where standard molecular dynamics simulations are nonergodic. Here we present a sampling method based on accelerated molecular dynamics in which rotatable dihedral angles and nonbonded interactions are boosted separately. This method (RaMD-db) is a different implementation of the dual-boost accelerated molecular dynamics, introduced earlier. The advantage is that this method speeds up sampling of the conformational space of biomolecules in explicit solvent, as the degrees of freedom most relevant for conformational transitions are accelerated. We tested RaMD-db on one of the most difficult sampling problems - protein folding. Starting from fully extended polypeptide chains, two fast folding α-helical proteins (Trpcage and the double mutant of C-terminal fragment of Villin headpiece) and a designed β-hairpin (Chignolin) were completely folded to their native structures in very short simulation time. Multiple folding/unfolding transitions could be observed in a single trajectory. Our results show that RaMD-db is a promisingly fast and efficient sampling method for conformational transitions in explicit solvent. RaMD-db thus opens new avenues for understanding biomolecular self-assembly and functional dynamics occurring on long time and length scales.
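The boost underlying accelerated MD has a standard closed form (Hamelberg et al.), which RaMD-db applies separately to the rotatable-dihedral term and the nonbonded term. The sketch below evaluates such a dual boost and the reweighting factor; all threshold and alpha values are illustrative, not the paper's settings:

```python
import numpy as np

def amd_boost(V, E, alpha):
    """Accelerated-MD boost (Hamelberg et al. form):
    dV = (E - V)^2 / (alpha + E - V) for V < E, else 0."""
    diff = E - np.asarray(V, dtype=float)
    return np.where(diff > 0, diff ** 2 / (alpha + diff), 0.0)

# dual-boost flavor: one (E, alpha) pair for the rotatable-dihedral energy,
# another for the total (nonbonded) potential; numbers are illustrative only
V_dih, V_tot = 120.0, -9500.0
dV = amd_boost(V_dih, E=150.0, alpha=30.0) + amd_boost(V_tot, E=-9300.0, alpha=200.0)

beta = 1.0 / 0.593                  # 1/kT in kcal/mol at ~300 K
weight = np.exp(beta * dV)          # reweighting factor for canonical averages
```

Boosting only the degrees of freedom relevant for conformational transitions is what lets the method keep explicit solvent while still reaching folding events.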
NASA Astrophysics Data System (ADS)
Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär
2018-01-01
Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether instead a simple rigid docking method can be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency of refining the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Mohd Khairul Bazli Mohd, E-mail: mkbazli@yahoo.com; Yusof, Fadhilah, E-mail: fadhilahy@utm.my; Daud, Zalina Mohd, E-mail: zalina@ic.utm.my
Recently, many rainfall network design techniques have been developed, discussed and compared by many researchers. Present-day hydrological studies require higher levels of accuracy from collected data. In numerous basins, the rain gauge stations are located without clear scientific justification. In this study, an attempt is made to redesign the rain gauge network for Johor, Malaysia in order to meet the required level of accuracy preset by rainfall data users. The existing network of 84 rain gauges in Johor is optimized and redesigned into new locations by using rainfall, humidity, solar radiation, temperature and wind speed data collected during the monsoon season (November - February) of 1975 until 2008. This study used the combination of a geostatistics method (variance-reduction method) and simulated annealing as the optimization algorithm during the redesign process. The results show that the new rain gauge locations provide the minimum value of estimated variance. This shows that the combination of the geostatistics method (variance-reduction method) and simulated annealing is successful in the development of the new optimum rain gauge system.
Investigation for Molecular Attraction Impact Between Contacting Surfaces in Micro-Gears
NASA Astrophysics Data System (ADS)
Yang, Ping; Li, Xialong; Zhao, Yanfang; Yang, Haiying; Wang, Shuting; Yang, Jianming
2013-10-01
The aim of this research work is to provide a systematic method for assessing the molecular attraction impact between contacting surfaces in a micro-gear train. This method is established by integrating involute profile analysis and molecular dynamics simulation. A mathematical computation of the micro-gear involute is presented based on geometrical properties, a Taylor expansion and the Hamaker assumption. In the meantime, the Morse potential function and a cut-off radius are introduced with a molecular dynamics simulation. Thus a hybrid computational method for the van der Waals force between the contacting faces in a micro-gear train is developed. An example is illustrated to show the performance of this method. The results show that the change of the van der Waals force in a micro-gear train has a nonlinear characteristic with parameter changes such as the modulus and the tooth number of the gear. The procedure implies the potential feasibility of controlling the van der Waals force by adjusting the manufacturing parameters in gear train design.
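The Morse pair interaction named above, with a cut-off radius, is simple to write down. The defaults below are roughly the classic Morse parameters fitted for copper and are only illustrative; the paper's fitted values are not reproduced here:

```python
import numpy as np

def morse_energy_force(r, D_e=0.34, a=1.36, r_e=2.87, r_cut=8.0):
    """Morse pair interaction with a cut-off radius:
    V(r) = D_e ((1 - exp(-a (r - r_e)))^2 - 1), zero beyond r_cut.
    Defaults are roughly Morse parameters for copper (eV, 1/A, A)."""
    r = np.asarray(r, dtype=float)
    x = np.exp(-a * (r - r_e))
    V = D_e * ((1.0 - x) ** 2 - 1.0)
    F = -2.0 * a * D_e * (1.0 - x) * x      # F = -dV/dr (radial component)
    inside = r <= r_cut
    return np.where(inside, V, 0.0), np.where(inside, F, 0.0)

r = np.linspace(2.0, 10.0, 9)
V, F = morse_energy_force(r)
```

The cut-off keeps the pairwise sum over tooth-surface atoms finite, which is what makes the hybrid involute-plus-MD computation tractable.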
High performance computation of radiative transfer equation using the finite element method
NASA Astrophysics Data System (ADS)
Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.
2018-05-01
This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering media. Two very different methods of parallelization, angular and spatial decomposition methods, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.
NASA Astrophysics Data System (ADS)
Langlois, A.; Royer, A.; Derksen, C.; Montpetit, B.; Dupont, F.; GoïTa, K.
2012-12-01
Satellite-passive microwave remote sensing has been extensively used to estimate snow water equivalent (SWE) in northern regions. Although passive microwave sensors operate independent of solar illumination and the lower frequencies are independent of atmospheric conditions, the coarse spatial resolution introduces uncertainties to SWE retrievals due to the surface heterogeneity within individual pixels. In this article, we investigate the coupling of a thermodynamic multilayered snow model with a passive microwave emission model. Results show that the snow model itself provides poor SWE simulations when compared to field measurements from two major field campaigns. Coupling the snow and microwave emission models with successive iterations to correct the influence of snow grain size and density significantly improves SWE simulations. This method was further validated using an additional independent data set, which also showed significant improvement using the two-step iteration method compared to standalone simulations with the snow model.
Numerically stable finite difference simulation for ultrasonic NDE in anisotropic composites
NASA Astrophysics Data System (ADS)
Leckey, Cara A. C.; Quintanilla, Francisco Hernando; Cole, Christina M.
2018-04-01
Simulation tools can enable optimized inspection of advanced materials and complex geometry structures. Recent work at NASA Langley is focused on the development of custom simulation tools for modeling ultrasonic wave behavior in composite materials. Prior work focused on the use of a standard staggered grid finite difference type of mathematical approach, by implementing a three-dimensional (3D) anisotropic Elastodynamic Finite Integration Technique (EFIT) code. However, observations showed that the anisotropic EFIT method displays numerically unstable behavior at the locations of stress-free boundaries for some cases of anisotropic materials. This paper gives examples of the numerical instabilities observed for EFIT and discusses the source of instability. As an alternative to EFIT, the 3D Lebedev Finite Difference (LFD) method has been implemented. The paper briefly describes the LFD approach and shows examples of stable behavior in the presence of stress-free boundaries for a monoclinic anisotropy case. The LFD results are also compared to experimental results and dispersion curves.
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently simulable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
Lattice Boltzmann Method of Different BGA Orientations on I-Type Dispensing Method
Gan, Z. L.; Ishak, M. H. H.; Abdullah, M. Z.; Khor, Soon Fuat
2016-01-01
This paper studies the three-dimensional (3D) simulation of fluid flow through a ball grid array (BGA) to replicate the real underfill encapsulation process. The effect of different solder bump arrangements of the BGA on the flow front, pressure and velocity of the fluid is investigated. The flow front, pressure and velocity for different time intervals are determined and analyzed for potential problems relating to solder bump damage. The simulation results from the Lattice Boltzmann Method (LBM) code are validated with experimental findings as well as with a conventional Finite Volume Method (FVM) code to ensure a highly accurate simulation setup. Based on the findings, good agreement can be seen between the LBM and FVM simulations as well as the experimental observations. It was shown that only LBM is capable of capturing the micro-void formation. This study also shows an increasing trend in fluid filling time for BGAs with perimeter, middle-empty and full orientations. The perimeter orientation has higher fluid pressure at the middle region of the BGA surface compared to the middle-empty and full orientations. This research sheds new light on highly accurate simulation of the encapsulation process using LBM and helps to further increase the reliability of the packages produced. PMID:27454872
NASA Astrophysics Data System (ADS)
Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.
2016-11-01
Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.
Method for inserting noise in digital mammography to simulate reduction in radiation dose
NASA Astrophysics Data System (ADS)
Borges, Lucas R.; de Oliveira, Helder C. R.; Nunes, Polyana F.; Vieira, Marcelo A. C.
2015-03-01
The quality of clinical x-ray images is closely related to the radiation dose used in the imaging study. The general principle for selecting the radiation dose is ALARA ("as low as reasonably achievable"). The practical optimization, however, remains challenging. It is well known that reducing the radiation dose increases the quantum noise, which could compromise the image quality. In order to conduct studies about dose reduction in mammography, it would be necessary to acquire repeated clinical images, from the same patient, with different dose levels. However, such practice would be unethical due to radiation-related risks. One solution is to simulate the effects of dose reduction in clinical images. This work proposes a new method, based on the Anscombe transformation, which simulates dose reduction in digital mammography by inserting quantum noise into clinical mammograms acquired with the standard radiation dose. Thus, it is possible to simulate different levels of radiation dose without exposing the patient to new levels of radiation. Results showed that the quality of simulated images generated with our method is the same as when using other methods found in the literature, with the novelty of using the Anscombe transformation for converting signal-independent Gaussian noise into signal-dependent quantum noise.
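A simplified variant of the idea can be sketched directly: scale the image to the target dose, move to the Anscombe domain where Poisson noise has unit variance, top up the missing variance with Gaussian noise, and invert. This sketch assumes purely Poisson counts; the published method also has to treat detector gain and electronic noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_dose_reduction(img_counts, f):
    """Simulate a dose fraction 0 < f < 1 of a quantum-limited image."""
    scaled = f * img_counts                   # mean of the reduced-dose image
    ansc = 2.0 * np.sqrt(scaled + 3.0 / 8.0)  # Anscombe: Poisson -> var ~1
    # f * img carries only a fraction f of the unit target variance in this
    # domain, so inject the remaining (1 - f) as white Gaussian noise
    ansc += rng.standard_normal(img_counts.shape) * np.sqrt(1.0 - f)
    return np.maximum((ansc / 2.0) ** 2 - 3.0 / 8.0, 0.0)   # invert

full_dose = rng.poisson(200.0, size=(4, 4)).astype(float)   # toy counts
half_dose = simulate_dose_reduction(full_dose, f=0.5)
```

The Anscombe round trip is what turns the injected signal-independent Gaussian noise into the signal-dependent quantum noise described above.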
NASA Astrophysics Data System (ADS)
Srinath, Srikar; Poyneer, Lisa A.; Rudy, Alexander R.; Ammons, S. M.
2014-08-01
The advent of expensive, large-aperture telescopes and complex adaptive optics (AO) systems has strengthened the need for detailed simulation of such systems from the top of the atmosphere to control algorithms. The credibility of any simulation is underpinned by the quality of the atmosphere model used for introducing phase variations into the incident photons. Hitherto, simulations which incorporate wind layers have relied upon phase screen generation methods that tax the computation and memory capacities of the platforms on which they run. This places limits on parameters of a simulation, such as exposure time or resolution, thus compromising its utility. As aperture sizes and fields of view increase the problem will only get worse. We present an autoregressive method for evolving atmospheric phase that is efficient in its use of computation resources and allows for variability in the power contained in frozen flow or stochastic components of the atmosphere. Users have the flexibility of generating atmosphere datacubes in advance of runs where memory constraints allow to save on computation time or of computing the phase at each time step for long exposure times. Preliminary tests of model atmospheres generated using this method show power spectral density and rms phase in accordance with established metrics for Kolmogorov models.
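The autoregressive idea can be sketched per Fourier mode: each mode is translated by the wind (a frozen-flow phase factor) and partially decorrelated, with fresh Kolmogorov-filtered noise injected so the per-mode variance is preserved. Scalings and constants below are schematic, not the paper's calibrated values:

```python
import numpy as np

n, dx = 128, 0.1                           # grid points, meters per pixel
dt, wind, tau = 0.001, (10.0, 0.0), 0.05   # s, m/s, boiling time scale (s)
rng = np.random.default_rng(0)

fx = np.fft.fftfreq(n, d=dx)
kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
k = np.hypot(kx, ky)
k[0, 0] = k[0, 1]                          # avoid the singular piston mode
psd_sqrt = k ** (-11.0 / 6.0)              # sqrt of a k^(-11/3) Kolmogorov PSD
psd_sqrt[0, 0] = 0.0

def noise_screen():
    w = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return psd_sqrt * w

# per-mode AR(1) coefficient: frozen-flow translation times decorrelation
alpha = np.exp(1j * (kx * wind[0] + ky * wind[1]) * dt) * np.exp(-dt / tau)
innov = np.sqrt(1.0 - np.exp(-2.0 * dt / tau))   # keeps per-mode variance

phase_hat = noise_screen()
for step in range(1000):
    phase_hat = alpha * phase_hat + innov * noise_screen()
    phase = np.fft.ifft2(phase_hat).real         # one evolved screen per step
```

Because only the spectral state is carried between steps, memory stays flat regardless of exposure time, which is the advantage over pre-generating huge translating phase screens.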
Methodical aspects of text testing in a driving simulator.
Sundin, A; Patten, C J D; Bergmark, M; Hedberg, A; Iraeus, I-M; Pettersson, I
2012-01-01
A test with 30 participants was conducted in a driving simulator. The test was a concept exploration and comparison of existing user interaction technologies for text message handling, with a focus on traffic safety and experience (technology familiarity and learning effects). Focus was put on methodical aspects: how to measure and how to analyze the data. Results show difficulties with the eye tracking system itself (e.g., calibration) and with the subsequent raw data preparation. The physical setup in the car was found to be important for test completion.
Reinventing atomic magnetic simulations with spin-orbit coupling
Perera, Meewanage Dilina N.; Eisenbach, Markus; Nicholson, Don M.; ...
2016-02-10
We propose a powerful extension to the combined molecular and spin dynamics method that fully captures the coupling between the atomic and spin subsystems via spin-orbit interactions. Moreover, the foundation of this method lies in the inclusion of the local magnetic anisotropies that arise as a consequence of the lattice symmetry breaking due to phonons or crystallographic defects. By using canonical simulations of bcc iron with the system coupled to a phonon heat bath, we show that our extension enables the previously unachievable angular momentum exchange between the atomic and spin degrees of freedom.
Automated Simulation Updates based on Flight Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Ward, David G.
2007-01-01
A statistically-based method for using flight data to update aerodynamic data tables used in flight simulators is explained and demonstrated. A simplified wind-tunnel aerodynamic database for the F/A-18 aircraft is used as a starting point. Flight data from the NASA F-18 High Alpha Research Vehicle (HARV) is then used to update the data tables so that the resulting aerodynamic model characterizes the aerodynamics of the F-18 HARV. Prediction cases are used to show the effectiveness of the automated method, which requires no ad hoc adjustments by the analyst.
Large-Eddy Simulation of Wind-Plant Aerodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, M. J.; Lee, S.; Moriarty, P. J.
In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation, and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done large-eddy simulations of wind plants with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We used the OpenFOAM CFD toolbox to create our solver. The simulated time-averaged power production of the turbines in the plant agrees well with field observations, except with the sixth turbine and beyond in each wind-aligned row. The power produced by each of those turbines is overpredicted by 25-40%. A direct comparison between simulated and field data is difficult because we simulate one wind direction with a speed and turbulence intensity characteristic of Lillgrund, but the field observations were taken over a year of varying conditions. The simulation shows the significant 60-70% decrease in the performance of the turbines behind the front row in this plant, which has a spacing of 4.3 rotor diameters in this direction. The overall plant efficiency is well predicted. This work shows the importance of using local grid refinement to simultaneously capture the meter-scale details of the turbine wake and the kilometer-scale turbulent atmospheric structures. Although this work illustrates the power of large-eddy simulation in producing a time-accurate solution, it required about one million processor-hours, showing the significant cost of large-eddy simulation.
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages through the mean value coordinate method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual reality and physical fidelity, and has great potential for applications in computer animation.
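The Cholesky-to-CG swap is easy to see in code: the implicit step solves a symmetric positive definite system A x = b with A = M + dt^2 K, and CG needs only matrix-vector products. A matrix-free sketch follows; the 1D spring chain at the end is a toy stand-in for the cage lattice:

```python
import numpy as np

def conjugate_gradient(A_mul, b, x0, tol=1e-8, max_iter=200):
    """Matrix-free CG for the SPD system A x = b, A = M + dt^2 K."""
    x = x0.copy()
    r = b - A_mul(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mul(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# toy 1D chain of unit masses joined by springs; A stays matrix-free
n, dt, k, m = 1000, 1e-3, 1e4, 1.0
diag = np.full(n, m + 2 * dt**2 * k)
diag[[0, -1]] = m + dt**2 * k

def A_mul(v):
    out = diag * v
    out[:-1] -= dt**2 * k * v[1:]
    out[1:] -= dt**2 * k * v[:-1]
    return out

b = np.zeros(n); b[0] = 1.0                  # an impulse at one end
x = conjugate_gradient(A_mul, b, np.zeros(n))
```

Each CG iteration is one A_mul plus a handful of vector operations, a pattern that maps naturally onto the GPU parallelization the paper proposes.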
Detection of Nanosilver Agents in Antibacterial Textiles
NASA Astrophysics Data System (ADS)
Xu, Chengtao; Zhao, Jie; Wu, Jianjian; Nie, Jinmei; Cui, Chengmin; Xie, Weibin; Zhang, Yan
2018-01-01
Analytical techniques are needed to detect nanosilver in textiles that are in direct contact with skin. In this paper, in order to study the extraction of nanosilver from the surface of textiles by human skin, we demonstrate the capability of a constant-temperature oscillation extraction method followed by Inductively Coupled Plasma Spectroscopy (ICP). Sweat and deionized water were selected as extraction solvents simulating the contact of human skin with textiles. SEM and TEM analysis shows the existence of nanosilver in the fabric and the aqueous extract. ICP analysis was accurate when analysing silver amounts in the range of 0.05-1.2 mg/L, with an r2 value of 0.9997. The percent recoveries of all fabrics were lower than 44%. The results show that the developed method of simulated human sweat extraction was not very effective, so the nanosilver might not be transferred effectively from the fabric to the human body.
How does the rigid-lid assumption affect LES simulation results in high Reynolds number flows?
NASA Astrophysics Data System (ADS)
Khosronejad, Ali; Farhadzadeh, Ali; SBU Collaboration
2017-11-01
This research is motivated by the work of Kara et al., JHE, 2015. They employed LES to model the flow around a model abutment at a Re number of 27,000. They showed that the first-order turbulence characteristics obtained with the rigid-lid (RL) assumption compare fairly well with those of the level-set (LS) method. Concerning the second-order statistics, however, their simulation results showed a significant dependence on the method used to describe the free surface. This finding can have important implications for open channel flow modeling. The Reynolds number for typical open channel flows, however, can be much larger than that of Kara et al.'s test case. Herein, we replicate the reported study by augmenting the geometric and hydraulic scales to reach a Re number one order of magnitude larger (~200,000). The Virtual Flow Simulator (VFS-Geophysics) model in its LES mode is used to simulate the test case using both the RL and LS methods. The computational results are validated using measured flow and free-surface data from our laboratory experiments. Our goal is to investigate the effects of the RL assumption on both first-order and second-order statistics at the high Reynolds numbers that occur in natural waterways. Acknowledgment: Computational resources are provided by the Center of Excellence in Wireless & Information Technology (CEWIT) of Stony Brook University.
Pan, Jui-Wen; Tsai, Pei-Jung; Chang, Kao-Der; Chang, Yung-Yuan
2013-03-01
In this paper, we propose a method to analyze the light extraction efficiency (LEE) enhancement of a nanopatterned sapphire substrate (NPSS) light-emitting diode (LED) by comparing wave optics software with ray optics software. Finite-difference time-domain (FDTD) simulations represent the wave optics software and Light Tools (LTs) simulations represent the ray optics software. First, we find the trends of, and an optimal solution for, the LEE enhancement when 2D-FDTD simulations are used to save simulation time and computational memory. The rigorous coupled-wave analysis method is utilized to explain the trend we obtain from the 2D-FDTD algorithm. The optimal solution is then applied in 3D-FDTD and LTs simulations. The results are similar, and the difference in LEE enhancement between the two simulations does not exceed 8.5% for the small LED chip area. More than 10^4 times the computational memory is saved in the LTs simulation in comparison to the 3D-FDTD simulation. Moreover, the LEE enhancement from the side of the LED can be obtained in the LTs simulation. An actual-size NPSS LED is simulated using LTs. The results show a more than 307% improvement in the total LEE enhancement of the NPSS LED with the optimal solution compared to the conventional LED.
Fletcher, Jack M.; Stuebing, Karla K.; Barth, Amy E.; Miciak, Jeremy; Francis, David J.; Denton, Carolyn A.
2013-01-01
Purpose Agreement across methods for identifying students as inadequate responders or as learning disabled is often poor. We report (1) an empirical examination of final status (post-intervention benchmarks) and dual-discrepancy growth methods, based on growth during the intervention and final status, for assessing response to intervention; and (2) a statistical simulation of psychometric issues that may explain low agreement. Methods After a Tier 2 intervention, final status benchmark criteria were used to identify 104 inadequate and 85 adequate responders to intervention, with comparisons of agreement and coverage for these methods and a dual-discrepancy method. Factors affecting agreement were investigated using computer simulation to manipulate reliability, the intercorrelation between measures, cut points, normative samples, and sample size. Results Identification of inadequate responders based on individual measures showed that single measures tended not to identify many members of the pool of 104 inadequate responders. Poor to fair levels of agreement for identifying inadequate responders were apparent between pairs of measures. In the simulation, comparisons across two simulated measures generated indices of agreement (kappa) that were generally low because of multiple psychometric issues inherent in any test. Conclusions Expecting excellent agreement between two correlated tests with even small amounts of unreliability may not be realistic. Assessing outcomes based on multiple measures, such as the level of CBM performance and short norm-referenced assessments of fluency, may improve the reliability of diagnostic decisions. PMID:25364090
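The simulation logic behind the kappa result is easy to reproduce: draw two test scores that share a latent skill with imperfect reliability, flag "inadequate responders" with a cut point, and compute agreement. All values below are illustrative, not the study's settings:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n, reliability, cut = 100_000, 0.81, -1.0    # cut near the 16th percentile

latent = rng.standard_normal(n)              # the student's true skill

def noisy_measure():
    err = rng.standard_normal(n)
    return np.sqrt(reliability) * latent + np.sqrt(1 - reliability) * err

flag1 = noisy_measure() < cut                # inadequate responder, test 1
flag2 = noisy_measure() < cut                # inadequate responder, test 2
print("base rate:", flag1.mean().round(3))
print("kappa:", cohen_kappa_score(flag1, flag2).round(2))
```

Even with a per-test reliability of 0.81, kappa at an extreme cut point falls well below the agreement one might naively expect, which is the psychometric point made above.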
A fast RCS accuracy assessment method for passive radar calibrators
NASA Astrophysics Data System (ADS)
Zhou, Yongsheng; Li, Chuanrong; Tang, Lingli; Ma, Lingling; Liu, QI
2016-10-01
In microwave radar radiometric calibration, the corner reflector acts as the standard reference target, but its structure is usually deformed during transportation and installation, or deformed by wind and gravity while permanently installed outdoors, which decreases the RCS accuracy and therefore the radiometric calibration accuracy. A fast RCS accuracy measurement method based on a 3-D measuring instrument and RCS simulation is proposed in this paper for tracking the characteristic variation of the corner reflector. In the first step, an RCS simulation algorithm is selected and its simulation accuracy is assessed. In the second step, the 3-D measuring instrument is selected and its measuring accuracy is evaluated. Once the accuracy of the selected RCS simulation algorithm and 3-D measuring instrument is satisfactory for the RCS accuracy assessment, the 3-D structure of the corner reflector is obtained by the 3-D measuring instrument, and the RCSs of the obtained 3-D structure and the corresponding ideal structure are then calculated respectively using the selected RCS simulation algorithm. The final RCS accuracy is the absolute difference of the two RCS calculation results. The advantage of the proposed method is that it can be applied outdoors easily, avoiding the correlation among the plate edge length error, plate orthogonality error, and plate curvature error. The accuracy of this method is higher than that of the method using the distortion equation. At the end of the paper, a measurement example is presented to show the performance of the proposed method.
Image reconstruction through thin scattering media by simulated annealing algorithm
NASA Astrophysics Data System (ADS)
Fang, Longjie; Zuo, Haoyi; Pang, Lin; Yang, Zuogang; Zhang, Xicheng; Zhu, Jianhua
2018-07-01
An idea for reconstructing the image of an object behind thin scattering media is proposed based on phase modulation. The optimized phase mask is achieved by modulating the scattered light using a simulated annealing algorithm. The correlation coefficient is exploited as a fitness function to evaluate the quality of the reconstructed image. The reconstructed images optimized by the simulated annealing algorithm and a genetic algorithm are compared in detail. The experimental results show that our proposed method offers better definition and higher speed than the genetic algorithm.
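A self-contained sketch of the procedure follows: a random complex matrix stands in for the thin scattering medium, the fitness is the correlation coefficient with the target image, and simulated annealing perturbs one phase segment at a time. The segment count, schedule and all constants are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_seg, n_out = 16, 16                        # SLM segments, camera pixels (toy)
Tmat = (rng.standard_normal((n_out**2, n_seg**2))
        + 1j * rng.standard_normal((n_out**2, n_seg**2)))   # scattering medium

target = np.zeros((n_out, n_out))
target[8, 8] = 1.0                           # desired image: a single focus

def forward(mask):
    field = Tmat @ np.exp(1j * mask.ravel())
    return np.abs(field.reshape(n_out, n_out)) ** 2

def fitness(mask):
    img = forward(mask)
    return np.corrcoef(img.ravel(), target.ravel())[0, 1]

mask = rng.uniform(0.0, 2 * np.pi, (n_seg, n_seg))
current = fitness(mask)
T_anneal, cooling = 0.05, 0.999
for _ in range(5000):
    i, j = rng.integers(0, n_seg, 2)
    old = mask[i, j]
    mask[i, j] = rng.uniform(0.0, 2 * np.pi)  # perturb one phase segment
    f_new = fitness(mask)
    if f_new >= current or rng.random() < np.exp((f_new - current) / T_anneal):
        current = f_new                       # accept (uphill or Metropolis)
    else:
        mask[i, j] = old                      # reject and restore
    T_anneal *= cooling
```

In the experiment, forward() is the physical system (SLM, medium, camera), so each fitness evaluation is one camera frame; the annealing schedule then controls how aggressively worse masks are tolerated early on.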
Ohira, Yoshiyuki; Uehara, Takanori; Noda, Kazutaka; Suzuki, Shingo; Shikino, Kiyoshi; Kajiwara, Hideki; Kondo, Takeshi; Hirota, Yusuke; Ikusaka, Masatomi
2017-01-01
Objectives We examined whether problem-based learning tutorials using patient-simulated videos showing daily life are more practical for clinical learning, compared with traditional paper-based problem-based learning, for the consideration rate of psychosocial issues and the recall rate for experienced learning. Methods Twenty-two groups with 120 fifth-year students were each assigned paper-based problem-based learning and video-based problem-based learning using patient-simulated videos. We compared target achievement rates in questionnaires using the Wilcoxon signed-rank test and discussion contents diversity using the Mann-Whitney U test. A follow-up survey used a chi-square test to measure students’ recall of cases in three categories: video, paper, and non-experienced. Results Video-based problem-based learning displayed significantly higher achievement rates for imagining authentic patients (p=0.001), incorporating a comprehensive approach including psychosocial aspects (p<0.001), and satisfaction with sessions (p=0.001). No significant differences existed in the discussion contents diversity regarding the International Classification of Primary Care Second Edition codes and chapter types or in the rate of psychological codes. In a follow-up survey comparing video and paper groups to non-experienced groups, the rates were higher for video (χ2=24.319, p<0.001) and paper (χ2=11.134, p=0.001). Although the video rate tended to be higher than the paper rate, no significant difference was found between the two. Conclusions Patient-simulated videos showing daily life facilitate imagining true patients and support a comprehensive approach that fosters better memory. The clinical patient-simulated video method is more practical and clinical problem-based tutorials can be implemented if we create patient-simulated videos for each symptom as teaching materials. PMID:28245193
Synthetic Seismograms of Explosive Sources Calculated by the Earth Simulator
NASA Astrophysics Data System (ADS)
Tsuboi, S.; Matsumoto, H.; Rozhkov, M.; Stachnik, J.
2017-12-01
We calculate broadband synthetic seismograms using the spectral-element method (Komatitsch & Tromp, 2001) for recent explosive events in the northern Korean peninsula. We use the supercomputer Earth Simulator system at JAMSTEC to compute synthetic seismograms using the spectral-element method. The simulations are performed on 8,100 processors, which require 2,025 nodes of the Earth Simulator. We use one chunk with an angular distance of 40 degrees to compute synthetic seismograms. On this number of nodes, a simulation of 5 minutes of wave propagation accurate at periods of 1.5 seconds and longer requires about 10 hours of CPU time. We use the CMT solution of Rozhkov et al. (2016) as the source model for this event. One example of a CMT solution for this source model has a 28% double couple component and a 51% isotropic component. The hypocenter depth of this solution is 1.4 km. Comparisons of the synthetic waveforms with the observations show that the arrival times of the Pn and Pg waves match the observations well. The comparison also shows that the agreement in amplitude of the other phases is not as good, which demonstrates that the crustal structure included in the simulation should be improved. The observed surface waves are also modeled well in the synthetics, which shows that the CMT solution we have used for this computation correctly captures the source characteristics of this event. Because the hypocenter locations of artificial explosive sources are already known, we may evaluate the crustal structure along the propagation path through waveform modeling of these sources. We may discuss the limitations of a one-dimensional crustal structure model by comparing synthetic waveforms for a 3D crustal structure with the observed seismograms.
2014-01-01
Background The DerSimonian and Laird approach (DL) is widely used for random effects meta-analysis, but this often results in inappropriate type I error rates. The method described by Hartung, Knapp, Sidik and Jonkman (HKSJ) is known to perform better when trials of similar size are combined. However, evidence in realistic situations, where one trial might be much larger than the other trials, is lacking. We aimed to evaluate the relative performance of the DL and HKSJ methods when studies of different sizes are combined and to develop a simple method to convert DL results to HKSJ results. Methods We evaluated the performance of the HKSJ versus DL approach in simulated meta-analyses of 2–20 trials with varying sample sizes and between-study heterogeneity, and allowing trials to have various sizes, e.g. 25% of the trials being 10 times larger than the smaller trials. We also compared the number of “positive” (statistically significant at p < 0.05) findings using empirical data from recent meta-analyses with ≥3 studies of interventions from the Cochrane Database of Systematic Reviews. Results The simulations showed that the HKSJ method consistently resulted in more adequate error rates than the DL method. When the significance level was 5%, the HKSJ error rates at most doubled, whereas for DL they could be over 30%. DL, and, far less so, HKSJ had more inflated error rates when the combined studies had unequal sizes and between-study heterogeneity. The empirical data from 689 meta-analyses showed that 25.1% of the significant findings for the DL method were non-significant with the HKSJ method. DL results can be easily converted into HKSJ results. Conclusions Our simulations showed that the HKSJ method consistently results in more adequate error rates than the DL method, especially when the number of studies is small, and can easily be applied routinely in meta-analyses. Even with the HKSJ method, extra caution is needed when there are ≤5 studies of very unequal sizes. PMID:24548571
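As a minimal illustration of the simulation design described above (not the authors' code; all parameter values are invented), the sketch below generates null meta-analyses in which one trial is ten times larger than the rest, and tallies how often DL (normal test) and HKSJ (rescaled variance with a t-test on k−1 degrees of freedom) falsely reject at the 5% level.

```python
# Sketch: type I error of DerSimonian-Laird (DL) vs Hartung-Knapp-Sidik-Jonkman
# (HKSJ) in null meta-analyses with one trial 10x larger than the others.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def meta_tests(y, v):
    """Return (DL p-value, HKSJ p-value) for effect sizes y with variances v."""
    k = len(y)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    # DL moment estimator of the between-study variance tau^2
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    ws = 1.0 / (v + tau2)
    mu = np.sum(ws * y) / np.sum(ws)
    # DL: normal test with variance 1/sum(ws)
    z = mu / np.sqrt(1.0 / np.sum(ws))
    p_dl = 2 * stats.norm.sf(abs(z))
    # HKSJ: rescaled variance and a t-test with k-1 degrees of freedom
    q = np.sum(ws * (y - mu) ** 2) / ((k - 1) * np.sum(ws))
    p_hksj = 2 * stats.t.sf(abs(mu / np.sqrt(q)), df=k - 1)
    return p_dl, p_hksj

k, tau2_true, n_sim = 5, 0.05, 2000
n = np.array([40, 40, 40, 400, 40])      # one trial 10 times larger
v = 4.0 / n                              # within-trial variances of the effect
rej = np.zeros(2)
for _ in range(n_sim):
    theta = rng.normal(0.0, np.sqrt(tau2_true), k)   # overall effect is zero
    y = rng.normal(theta, np.sqrt(v))
    rej += np.array(meta_tests(y, v)) < 0.05
print("empirical type I error  DL: %.3f  HKSJ: %.3f" % tuple(rej / n_sim))
```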
NASA Astrophysics Data System (ADS)
Endichi, A.; Zaari, H.; Benyoussef, A.; El Kenz, A.
2018-06-01
The magnetic behavior of the LaCr2Si2C compound is investigated in this work using first-principles methods, Monte Carlo simulation (MCS), and the mean field approximation (MFA). The structural, electronic, and magnetic properties are described using the ab initio method in the framework of the Generalized Gradient Approximation (GGA) and the Full Potential-Linearized Augmented Plane Wave (FP-LAPW) method implemented in the WIEN2K package. We have also computed the coupling terms between magnetic atoms, which are used in the Hamiltonian model. A theoretical study using the mean field approximation and Monte Carlo simulation within the Ising model is carried out to better understand the magnetic properties of this compound. Our results show a ferromagnetic ordering of the Cr magnetic moments below a Curie temperature of 30 K (Tc < 30 K) in LaCr2Si2C. Other quantities are also computed: the magnetization, the energy, the specific heat, and the susceptibility. This material shows small signs of superconductivity, and future research could focus on enhancing the transport and magnetic properties of this system.
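For readers unfamiliar with the MCS step, the following is a generic Metropolis Monte Carlo sketch of a small 2D Ising ferromagnet; the lattice and coupling are illustrative stand-ins, not the ab initio Cr–Cr exchange couplings computed in the paper.

```python
# Sketch: Metropolis Monte Carlo for a 2D Ising ferromagnet (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
L, J = 16, 1.0                        # lattice size and exchange coupling (J > 0)

def sweep(s, beta):
    """One Metropolis sweep: attempt L*L random single-spin flips."""
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        dE = 2.0 * J * s[i, j] * nb   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

for T in (1.5, 2.0, 2.27, 3.0):       # exact Tc is ~2.269 J/kB for this model
    s = np.ones((L, L), dtype=int)
    for _ in range(200):              # equilibration sweeps
        sweep(s, 1.0 / T)
    ms = []
    for _ in range(200):              # measurement sweeps
        sweep(s, 1.0 / T)
        ms.append(abs(s.mean()))
    print(f"T = {T:4.2f}  <|m|> = {np.mean(ms):.3f}")
```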
Infrared Extinction Performance of Randomly Oriented Microbial-Clustered Agglomerate Materials.
Li, Le; Hu, Yihua; Gu, Youlin; Zhao, Xinying; Xu, Shilong; Yu, Lei; Zheng, Zhi Ming; Wang, Peng
2017-11-01
In this study, the spatial structure of randomly distributed clusters of fungi An0429 spores was simulated using a cluster-cluster aggregation (CCA) model, and the single-scattering parameters of fungi An0429 spores were calculated using the discrete dipole approximation (DDA) method. The transmittance of 10.6 µm infrared (IR) light in the aggregated fungi An0429 spore swarm was simulated using the Monte Carlo method. Several parameters that affect the transmittance of 10.6 µm IR light were discussed, such as the number and radius of the original fungi An0429 spores, the porosity of the aggregated spores, and the density of aggregated spores in the aerosol formation area. Finally, the transmittances of microbial materials of different qualities were measured on a dynamic test platform. The simulation results showed that the parameters analyzed were closely connected with the extinction performance of fungi An0429 spores. By controlling the values of the influencing factors, the transmittance can be kept below a given threshold to meet attenuation requirements in application. In addition, the experimental results showed that the Monte Carlo method reflects well the attenuation law of IR light in fungi An0429 spore agglomerate swarms.
Measurement and simulation of thermal neutron flux distribution in the RTP core
NASA Astrophysics Data System (ADS)
Rabir, Mohamad Hairie B.; Jalal Bayar, Abi Muttaqin B.; Hamzah, Na'im Syauqi B.; Mustafa, Muhammad Khairul Ariff B.; Karim, Julia Bt. Abdul; Zin, Muhammad Rawi B. Mohamed; Ismail, Yahya B.; Hussain, Mohd Huzair B.; Mat Husin, Mat Zin B.; Dan, Roslan B. Md; Ismail, Ahmad Razali B.; Husain, Nurfazila Bt.; Jalil Khan, Zareen Khan B. Abdul; Yakin, Shaiful Rizaide B. Mohd; Saad, Mohamad Fauzi B.; Masood, Zarina Bt.
2018-01-01
The in-core thermal neutron flux distribution was determined using measurement and simulation methods for the Malaysian PUSPATI TRIGA Reactor (RTP). In this work, online thermal neutron flux measurement using a Self Powered Neutron Detector (SPND) was performed to verify and validate the computational methods for neutron flux calculation in RTP. The experimental results were used to validate the calculations performed with the Monte Carlo code MCNP. The detailed in-core neutron flux distributions were estimated using the MCNP mesh tally method. The neutron flux mapping obtained revealed the heterogeneous configuration of the core. Based on the measurement and simulation, the thermal flux profile peaked at the centre of the core and gradually decreased towards its outer side. The results show relatively good agreement between calculation and measurement, with both showing the same radial thermal flux profile inside the core; the MCNP model overestimates the flux, with a maximum discrepancy of around 20% compared with the SPND measurement. As the model also predicts the neutron flux distribution in the core well, it can be used for characterization of the full core, that is, neutron flux and spectra calculations, dose rate calculations, reaction rate calculations, etc.
Attitude motion compensation for imager on Fengyun-4 geostationary meteorological satellite
NASA Astrophysics Data System (ADS)
Lyu, Wang; Dai, Shoulun; Dong, Yaohai; Shen, Yili; Song, Xiaozheng; Wang, Tianshu
2017-09-01
A compensation method is used in the Chinese Fengyun-4 satellite to counteract the line-of-sight influence of attitude motion during imaging. The method is applied on board by adding the compensation amount to the instrument scanning control circuit. Mathematical simulation and three-axis air-bearing test results show that the method works effectively.
Passive 3D imaging of nuclear waste containers with Muon Scattering Tomography
NASA Astrophysics Data System (ADS)
Thomay, C.; Velthuis, J.; Poffley, T.; Baesso, P.; Cussans, D.; Frazão, L.
2016-03-01
The non-invasive imaging of dense objects is of particular interest in the context of nuclear waste management, where it is important to know the contents of waste containers without opening them. Using Muon Scattering Tomography (MST), it is possible to obtain a detailed 3D image of the contents of a waste container on reasonable timescales, showing both the high and low density materials inside. We show the performance of such a method on a Monte Carlo simulation of a dummy waste drum object containing objects of different shapes and materials. The simulation has been tuned with our MST prototype detector performance. In particular, we show that both a tungsten penny of 2 cm radius and 1 cm thickness, and a uranium sheet of 0.5 cm thickness can be clearly identified. We also show the performance of a novel edge finding technique, by which the edges of embedded objects can be identified more precisely than by solely using the imaging method.
Estimating variation in a landscape simulation of forest structure.
S. Hummel; P. Cunningham
2006-01-01
Modern technology makes it easy to show how forested landscapes might change with time, but it remains difficult to estimate how sampling error affects landscape simulation results. To address this problem we used two methods to project the area in late-seral forest (LSF) structure for the same 6070 hectare (ha) study site over 30 years. The site was stratified into...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y.C.; Doolen, G.; Chen, H.H.
A high-order correlation tensor formalism for neural networks is described. The model can simulate autoassociative, heteroassociative, as well as multiassociative memory. For the autoassociative model, simulation results show a drastic increase in memory capacity and speed over that of the standard Hopfield-like correlation matrix methods. The possibility of using multiassociative memory for a learning universal inference network is also discussed. 9 refs., 5 figs.
Multi-scale sensitivity analysis of pile installation using DEM
NASA Astrophysics Data System (ADS)
Esposito, Ricardo Gurevitz; Velloso, Raquel Quadros; , Eurípedes do Amaral Vargas, Jr.; Danziger, Bernadete Ragoni
2017-12-01
The disturbances experienced by the soil due to pile installation and dynamic soil-structure interaction still present major challenges to foundation engineers. These phenomena exhibit complex behaviors that are difficult to measure in physical tests and to reproduce in numerical models. Because of the simplified approach used by the discrete element method (DEM) to simulate large deformations and the nonlinear stress-dilatancy behavior of granular soils, the DEM is an excellent tool for investigating these processes. This study presents a sensitivity analysis of the effects of introducing a single pile using the PFC2D software developed by Itasca Co. The different scales investigated in these simulations include point and shaft resistance, alterations in porosity and stress fields, and particle displacements. Several simulations were conducted to investigate the effects of different numerical approaches, indicating that the method of installation and particle rotation can greatly influence the conditions around the numerical pile. Minor effects were also noted due to changes in penetration velocity and pile-soil friction. The difference in behavior between a moving and a stationary pile shows good qualitative agreement with previous experimental results, indicating the necessity of performing a force equilibrium step prior to simulating any load test.
Fluids density functional theory and initializing molecular dynamics simulations of block copolymers
NASA Astrophysics Data System (ADS)
Brown, Jonathan R.; Seo, Youngmi; Maula, Tiara Ann D.; Hall, Lisa M.
2016-03-01
Classical fluids density functional theory (fDFT), which can predict the equilibrium density profiles of polymeric systems, and coarse-grained molecular dynamics (MD) simulations, which are often used to show both structure and dynamics of soft materials, can be implemented using very similar bead-based polymer models. We aim to use fDFT and MD in tandem to examine the same system from these two points of view and take advantage of the different features of each methodology. Additionally, the density profiles resulting from fDFT calculations can be used to initialize the MD simulations in a close-to-equilibrium structure, speeding up the simulations. Here, we show how this method can be applied to study microphase-separated states of both typical diblock and tapered diblock copolymers in which a region with a gradient in composition is placed between the pure blocks. Both methods, applied at constant pressure, predict a decrease in total density as segregation strength or the length of the tapered region is increased. The predictions for the density profiles from fDFT and MD are similar across materials with a wide range of interfacial widths.
Stark, Austin C.; Andrews, Casey T.; Elcock, Adrian H.
2013-01-01
Coarse-grained (CG) simulation methods are now widely used to model the structure and dynamics of large biomolecular systems. One important issue for using such methods – especially with regard to using them to model, for example, intracellular environments – is to demonstrate that they can reproduce experimental data on the thermodynamics of protein-protein interactions in aqueous solutions. To examine this issue, we describe here simulations performed using the popular coarse-grained MARTINI force field, aimed at computing the thermodynamics of lysozyme and chymotrypsinogen self-interactions in aqueous solution. Using molecular dynamics simulations to compute potentials of mean force between a pair of protein molecules, we show that the original parameterization of the MARTINI force field is likely to significantly overestimate the strength of protein-protein interactions to the extent that the computed osmotic second virial coefficients are orders of magnitude more negative than experimental estimates. We then show that a simple down-scaling of the van der Waals parameters that describe the interactions between protein pseudo-atoms can bring the simulated thermodynamics into much closer agreement with experiment. Overall, the work shows that it is feasible to test explicit-solvent CG force fields directly against thermodynamic data for proteins in aqueous solutions, and highlights the potential usefulness of osmotic second virial coefficient measurements for fully parameterizing such force fields. PMID:24223529
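The experimental observable in question, the osmotic second virial coefficient, follows from a computed potential of mean force W(r) by the one-dimensional integral B22 = -2π ∫ [exp(-W(r)/kBT) - 1] r² dr. The sketch below evaluates that integral numerically; the tabulated W(r) is invented for illustration and is not the MARTINI PMF from the study.

```python
# Sketch: osmotic second virial coefficient B22 from a tabulated PMF W(r).
# The PMF below is made up for illustration; units are nm and kJ/mol.
import numpy as np

kB_T = 2.479                                    # kJ/mol at 298 K
r = np.linspace(2.0, 12.0, 500)                 # center-to-center distance, nm
W = 4.0 * np.exp(-(r - 3.2)**2 / 0.5) - 2.5 * np.exp(-(r - 4.5)**2 / 1.2)

# B22 = -2*pi * Int [exp(-W/kBT) - 1] r^2 dr; a hard core at small r would
# enter through W -> +inf there (omitted in this toy PMF).
integrand = (np.exp(-W / kB_T) - 1.0) * r**2
B22 = -2.0 * np.pi * np.trapz(integrand, r)     # nm^3 per molecule pair
print(f"B22 = {B22:.1f} nm^3  (negative => net attraction)")
```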
Simulation methods to estimate design power: an overview for applied research.
Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E
2011-06-20
Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
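A minimal version of the recipe reviewed here (the article itself provides fuller worked examples with R and Stata code) looks like the following sketch, which estimates power for a two-arm individual-randomized design by simulating the trial repeatedly and counting rejections; the effect size and variance are illustrative assumptions.

```python
# Sketch: simulation-based power for a simple two-arm randomized design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

def simulated_power(n_per_arm, effect, sd, n_sim=2000, alpha=0.05):
    """Fraction of simulated trials in which a t-test rejects at level alpha."""
    hits = 0
    for _ in range(n_sim):
        ctrl = rng.normal(0.0, sd, n_per_arm)   # e.g. height-for-age z-scores
        trt = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(trt, ctrl)
        hits += p < alpha
    return hits / n_sim

for n in (50, 100, 200):
    print(f"n = {n:3d} per arm  power = {simulated_power(n, 0.3, 1.0):.2f}")
```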
Multiscale Quantum Mechanics/Molecular Mechanics Simulations with Neural Networks.
Shen, Lin; Wu, Jingheng; Yang, Weitao
2016-10-11
Molecular dynamics simulation with multiscale quantum mechanics/molecular mechanics (QM/MM) methods is a very powerful tool for understanding the mechanism of chemical and biological processes in solution or in enzymes. However, its computational cost can be too high for many biochemical systems because of the large number of ab initio QM calculations. Semiempirical QM/MM simulations have much higher efficiency, and their accuracy can be improved with a correction to reach the ab initio QM/MM level. The computational cost of the ab initio calculations for the correction determines the overall efficiency. In this paper we developed a neural network method for QM/MM calculation as an extension of the neural-network representation reported by Behler and Parrinello. With this approach, the potential energy of any configuration along the reaction path for a given QM/MM system can be predicted at the ab initio QM/MM level based on the semiempirical QM/MM simulations. We further applied this method to three reactions in water to calculate the free energy changes. The free-energy profile obtained from the semiempirical QM/MM simulation is corrected to the ab initio QM/MM level with the potential energies predicted by the constructed neural network. The results are in excellent accordance with the reference data obtained from ab initio QM/MM molecular dynamics simulation or corrected with direct ab initio QM/MM potential energies. Compared with the correction using direct ab initio QM/MM potential energies, our method shows a speed-up of one to two orders of magnitude. This demonstrates that the neural network method combined with semiempirical QM/MM calculation can be an efficient and reliable strategy for chemical reaction simulations.
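A schematic sketch of the underlying delta-learning idea follows; the descriptors, stand-in energies, and network size are invented for illustration and do not reproduce the authors' Behler-Parrinello-style implementation.

```python
# Sketch: learn the (ab initio - semiempirical) energy correction with a small
# neural network, then add it to cheap semiempirical energies.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 10))                 # stand-in configuration descriptors
E_semi = X @ rng.normal(size=10)               # stand-in semiempirical energies
E_ai = E_semi + np.sin(X[:, 0]) + 0.1 * X[:, 1] ** 2   # stand-in ab initio

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X[:400], (E_ai - E_semi)[:400])        # learn only the correction

E_corrected = E_semi[400:] + net.predict(X[400:])
rmse = np.sqrt(np.mean((E_corrected - E_ai[400:]) ** 2))
print(f"test RMSE of corrected energies: {rmse:.3f}")
```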
Grummer, Jared A; Bryson, Robert W; Reeder, Tod W
2014-03-01
Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
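The sensitivity to the marginal-likelihood estimator is easy to demonstrate outside phylogenetics. The toy sketch below (unrelated to the Sceloporus data; the model and priors are assumptions) uses a conjugate normal model whose marginal likelihood is known exactly, so a stepping-stone estimator and the harmonic mean estimator can be checked directly; the HME's reliance on rare low-likelihood draws is what makes it unstable.

```python
# Sketch: exact vs stepping-stone (SS) vs harmonic mean (HME) log marginal
# likelihood for y_i ~ N(theta, sigma^2), theta ~ N(0, tau^2), sigma known.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sigma, tau = 1.0, 3.0
y = rng.normal(0.7, sigma, size=50)
n, s = len(y), y.sum()

def loglik(theta):
    return np.sum(stats.norm.logpdf(y[:, None], theta, sigma), axis=0)

# exact answer: marginally, y ~ N(0, sigma^2 I + tau^2 11')
cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
exact = stats.multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)

def power_post_sample(beta, m):
    """Draws from the power posterior ~ L(theta)^beta * prior (normal here)."""
    prec = beta * n / sigma**2 + 1.0 / tau**2
    return rng.normal(beta * s / sigma**2 / prec, np.sqrt(1.0 / prec), m)

# harmonic mean estimator, using draws from the full posterior (beta = 1)
ll = loglik(power_post_sample(1.0, 5000))
c = ll.min()
hme = c - np.log(np.mean(np.exp(-(ll - c))))

# stepping-stone estimator over a temperature ladder concentrated near the prior
betas = np.linspace(0.0, 1.0, 33) ** 3
ss = 0.0
for b0, b1 in zip(betas[:-1], betas[1:]):
    ll = loglik(power_post_sample(b0, 2000))
    c = ll.max()
    ss += (b1 - b0) * c + np.log(np.mean(np.exp((b1 - b0) * (ll - c))))

print(f"exact {exact:.2f}   stepping-stone {ss:.2f}   harmonic mean {hme:.2f}")
```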
A Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information
NASA Astrophysics Data System (ADS)
Lian, Shizhong; Chen, Jiangping; Luo, Minghai
2016-06-01
Water information cannot be accurately extracted from TM images in which true information is lost because of blocking clouds and missing data stripes. Since water is continuously distributed under natural conditions, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different kinds of disturbing information, from clouds and missing data stripes, are simulated. Water information is extracted using global histogram matching, local histogram matching, and the probability-based statistical method in the simulated images. Experiments show that a smaller Areal Error and higher Boundary Recall can be obtained using this method compared with the conventional methods.
Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K
2013-04-22
A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.
Trajectory control of robot manipulators with closed-kinematic chain mechanism
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Pooran, Farhad J.; Premack, Timothy
1987-01-01
The problem of Cartesian trajectory control of a closed-kinematic chain mechanism robot manipulator, recently built at CAIR to study the assembly of NASA hardware for the future Space Station, is considered. The study is performed by both computer simulation and experimentation for tracking of three different paths: a straight line, a sinusoid, and a circle. Linearization and pole placement methods are employed to design the controller gains. Results show that the controllers are robust and that there is good agreement between simulation and experimentation. The results also show excellent tracking quality and small overshoots.
A Comparison of Two Balance Calibration Model Building Methods
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Ulbrich, Norbert
2007-01-01
Simulated strain-gage balance calibration data is used to compare the accuracy of two balance calibration model building methods for different noise environments and calibration experiment designs. The first building method obtains a math model for the analysis of balance calibration data after applying a candidate math model search algorithm to the calibration data set. The second building method uses stepwise regression analysis in order to construct a model for the analysis. Four balance calibration data sets were simulated in order to compare the accuracy of the two math model building methods. The simulated data sets were prepared using the traditional One Factor At a Time (OFAT) technique and the Modern Design of Experiments (MDOE) approach. Random and systematic errors were introduced in the simulated calibration data sets in order to study their influence on the math model building methods. Residuals of the fitted calibration responses and other statistical metrics were compared in order to evaluate the calibration models developed with different combinations of noise environment, experiment design, and model building method. Overall, predicted math models and residuals of both math model building methods show very good agreement. Significant differences in model quality were attributable to noise environment, experiment design, and their interaction. Generally, the addition of systematic error significantly degraded the quality of calibration models developed from OFAT data by either method, but MDOE experiment designs were more robust with respect to the introduction of a systematic component of the unexplained variance.
NASA Technical Reports Server (NTRS)
Scott, W. A.
1984-01-01
The propulsion simulator calibration laboratory (PSCL), in which calibrations can be performed to determine the gross thrust and airflow of propulsion simulators installed in wind tunnel models, is described. The preliminary checkout, evaluation, and calibration of the PSCL's 3-component force measurement system is reported. Methods and equipment were developed for the alignment and calibration of the force measurement system. The initial alignment of the system demonstrated the need for more efficient means of aligning the system's components. The use of precision alignment jigs increases both the speed and accuracy with which the system is aligned. The calibration of the force measurement system shows that the methods and equipment for this procedure can be successful.
Nonholonomic Hamiltonian Method for Molecular Dynamics Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Bass, Joseph
2015-06-01
Conventional molecular dynamics simulations of reacting shocks employ a holonomic Hamiltonian formulation: the breaking and forming of covalent bonds is described by potential functions. In general these potential functions: (a) are algebraically complex, (b) must satisfy strict smoothness requirements, and (c) contain many fitted parameters. In recent research the authors have developed a new nonholonomic formulation of reacting molecular dynamics. In this formulation bond orders are determined by rate equations, and the bonding-debonding process need not be described by differentiable functions. This simplifies the representation of complex chemistry and reduces the number of fitted model parameters. Example applications of the method show molecular-level shock-to-detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
Simulation of Surface Pressure Induced by Vortex/Body Interaction
NASA Astrophysics Data System (ADS)
He, M.; Islam, M.; Veitch, B.; Bose, N.; Colbourne, M. B.; Liu, P.
When a strong vortical wake impacts a structure, the pressure on the impacted surface undergoes large variations in amplitude. This pressure fluctuation is one of the main sources of severe structural vibration and hydrodynamic noise. Economical and effective methods for predicting the fluctuating pressure are required by engineers in many fields. This paper presents a wake impingement model (WIM) that has been incorporated into a panel method code, Propella, and its application to simulations of a podded propeller wake impacting a strut. Simulated strut surface pressure distributions and variations are compared with experimental data in terms of time-averaged and phase-averaged components. The pressure comparisons show that the calculated results are in good agreement with the experimental data.
Simulation of Charged Systems in Heterogeneous Dielectric Media via a True Energy Functional
NASA Astrophysics Data System (ADS)
Jadhao, Vikram; Solis, Francisco J.; de la Cruz, Monica Olvera
2012-11-01
For charged systems in heterogeneous dielectric media, a key obstacle for molecular dynamics (MD) simulations is the need to solve the Poisson equation in the media. This obstacle can be bypassed using MD methods that treat the local polarization charge density as a dynamic variable, but such approaches require access to a true free energy functional, one that evaluates to the equilibrium electrostatic energy at its minimum. In this Letter, we derive the needed functional. As an application, we develop a Car-Parrinello MD method for the simulation of free charges present near a spherical emulsion droplet separating two immiscible liquids with different dielectric constants. Our results show the presence of nonmonotonic ionic profiles in the dielectric with a lower dielectric constant.
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
Evaluation of finite difference and FFT-based solutions of the transport of intensity equation.
Zhang, Hongbo; Zhou, Wen-Jing; Liu, Ying; Leber, Donald; Banerjee, Partha; Basunia, Mahmudunnabi; Poon, Ting-Chung
2018-01-01
A finite difference method is proposed for solving the transport of intensity equation. Simulation results show that although slower than fast Fourier transform (FFT)-based methods, finite difference methods are able to reconstruct the phase with better accuracy due to relaxed assumptions for solving the transport of intensity equation relative to FFT methods. Finite difference methods are also more flexible than FFT methods in dealing with different boundary conditions.
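For context, the FFT baseline being compared against reduces, for uniform intensity I0, to a Poisson problem, ∇²φ = -(k/I0) ∂I/∂z, solved in Fourier space under an implicit periodicity assumption, which is exactly the kind of restriction the finite difference method relaxes. The sketch below is a minimal version of that baseline with a self-check on a synthetic phase; the wavelength and grid spacing are illustrative assumptions.

```python
# Sketch: FFT-based TIE phase recovery for the uniform-intensity case.
import numpy as np

def tie_fft(dIdz, k, I0, dx):
    """Recover phase from dI/dz via an FFT Poisson solve (periodic boundaries)."""
    ny, nx = dIdz.shape
    fx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    fy = 2 * np.pi * np.fft.fftfreq(ny, dx)
    KX, KY = np.meshgrid(fx, fy)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                          # dodge division by zero at DC
    rhs = -(k / I0) * dIdz                  # laplacian(phi) = -(k/I0) dI/dz
    phi_hat = -np.fft.fft2(rhs) / k2        # -k2 * phi_hat = rhs_hat
    phi_hat[0, 0] = 0.0                     # phase is defined up to a constant
    return np.real(np.fft.ifft2(phi_hat))

# self-check on a smooth synthetic phase (a periodic-friendly Gaussian bump)
n, dx, I0 = 128, 1e-6, 1.0
k = 2 * np.pi / 633e-9                      # assumed HeNe wavelength
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
phi = np.exp(-(X**2 + Y**2) / (10 * dx) ** 2)
f = 2 * np.pi * np.fft.fftfreq(n, dx)
KX, KY = np.meshgrid(f, f)
lap = np.real(np.fft.ifft2(-(KX**2 + KY**2) * np.fft.fft2(phi)))
dIdz = -(I0 / k) * lap                      # forward TIE at uniform intensity
rec = tie_fft(dIdz, k, I0, dx)
print("max error:", np.abs((rec - rec.mean()) - (phi - phi.mean())).max())
```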
Wolf, Eric M.; Causley, Matthew; Christlieb, Andrew; ...
2016-08-09
Here, we propose a new particle-in-cell (PIC) method for the simulation of plasmas based on a recently developed, unconditionally stable solver for the wave equation. This method is not subject to a CFL restriction, limiting the ratio of the time step size to the spatial step size, typical of explicit methods, while maintaining computational cost and code complexity comparable to such explicit schemes. We describe the implementation in one and two dimensions for both electrostatic and electromagnetic cases, and present the results of several standard test problems, showing good agreement with theory with time step sizes much larger than allowed by typical CFL restrictions.
NASA Astrophysics Data System (ADS)
Li, Jun-jun; Yang, Xiao-jun; Xiao, Ying-jie; Xu, Bo-wei; Wu, Hua-feng
2018-03-01
The immersed tunnel is an important part of the Hong Kong-Zhuhai-Macao Bridge (HZMB) project. In immersed tunnel floating, translation, which includes straight and transverse movements, is the main working mode. To decide the magnitude and direction of the towing force for each tug, a particle swarm-based translation control method is presented for the non-powered immersed tunnel element. A linear weighted logarithmic function is used to avoid weak subgoals. In simulation, the particle swarm-based control method is evaluated and compared with the traditional empirical method in the case of the HZMB project. Simulation results show that the presented method delivers a performance improvement in terms of enhanced surplus towing force.
A Novel Crosstalk Suppression Method of the 2-D Networked Resistive Sensor Array
Wu, Jianfeng; Wang, Lei; Li, Jianqing; Song, Aiguo
2014-01-01
The 2-D resistive sensor array in the row–column fashion suffers from crosstalk caused by parasitic parallel paths. First, we proposed an Improved Isolated Drive Feedback Circuit with Compensation (IIDFCC) based on the voltage feedback method to suppress the crosstalk. In this method, a compensating resistor is used specifically to reduce the crosstalk caused by the column multiplexer resistors and the adjacent row elements. Then, a mathematical expression for the equivalent resistance of the element being tested (EBT) in this circuit was analytically derived and verified by circuit simulations. The simulation results show that the measurement method can greatly reduce the influence on the EBT of the parasitic parallel paths through the multiplexers' channel resistances and the adjacent elements. PMID:25046011
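To see why compensation is needed at all, consider the smallest case. In a bare 2×2 array read without isolation or feedback, the element being tested is shunted by a three-element series sneak path through its neighbours; the toy sketch below (not the IIDFCC circuit, and with invented resistance values) quantifies the resulting readout error.

```python
# Toy model of the sneak-path error in a bare 2x2 resistive array: reading
# R[0,0] with the unselected lines floating also sees R[0,1] + R[1,1] + R[1,0].
import numpy as np

R = np.array([[1000.0, 2000.0],
              [3000.0,  500.0]])        # element resistances, ohms (invented)

sneak = R[0, 1] + R[1, 1] + R[1, 0]     # series path through the neighbours
measured = 1.0 / (1.0 / R[0, 0] + 1.0 / sneak)   # parallel combination
print(f"true R00 = {R[0, 0]:.0f} ohm, naive readout = {measured:.0f} ohm "
      f"({100 * (measured - R[0, 0]) / R[0, 0]:+.1f}% error)")
```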
Incoherent beam combining based on the momentum SPGD algorithm
NASA Astrophysics Data System (ADS)
Yang, Guoqing; Liu, Lisheng; Jiang, Zhenhua; Guo, Jin; Wang, Tingfeng
2018-05-01
Incoherent beam combining (ICBC) technology is one of the most promising ways to achieve high-energy, near-diffraction-limited laser output. In this paper, the momentum method is proposed as a modification of the stochastic parallel gradient descent (SPGD) algorithm. The momentum method can efficiently improve the convergence speed of the combining system. An analytical treatment is employed to interpret the principle of the momentum method. Furthermore, the proposed algorithm is verified through simulations as well as experiments. The results of the simulations and the experiments show that the proposed algorithm not only accelerates the iteration, but also preserves the stability of the combining process. This demonstrates the feasibility of the proposed algorithm in the beam combining system.
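A minimal sketch of SPGD with a momentum term follows; the metric is a stand-in quadratic function rather than a measured combined power, and the gain, dither amplitude, and momentum factor are illustrative guesses, not the paper's tuned values.

```python
# Sketch: stochastic parallel gradient descent (SPGD) with momentum.
import numpy as np

rng = np.random.default_rng(5)

def metric(u):                       # stand-in for the measured combined power
    return -np.sum((u - 0.8) ** 2)

u = np.zeros(16)                     # e.g. control signals for 16 beamlets
v = np.zeros_like(u)                 # momentum ("velocity") accumulator
gain, mom, amp = 0.4, 0.8, 0.05

for it in range(300):
    delta = amp * rng.choice([-1.0, 1.0], size=u.shape)   # Bernoulli dither
    dJ = metric(u + delta) - metric(u - delta)            # two-sided probe
    v = mom * v + gain * dJ * delta  # momentum update; mom = 0 is plain SPGD
    u += v
print("final metric:", metric(u))
```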
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
NASA Astrophysics Data System (ADS)
Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi
2017-09-01
A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited by heavy computational loads. This study investigates a variety of means to reduce the computational load and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, which keeps numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the capillary number that maintains flow patterns similar to those of the precise simulation; this is attempted because the computational load is inversely proportional to the capillary number. The results show that the capillary number can be increased to 3.0 × 10⁻³, where actual operation corresponds to Ca = 10⁻⁵–10⁻⁸. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, the effects of pore uniformity in the GDL are demonstrated as an example of a large-scale simulation covering a channel.
NASA Astrophysics Data System (ADS)
Ambarita, H.; Widodo, T. I.; Nasution, D. M.
2017-01-01
In order to reduce the fossil fuel consumption of compression ignition (CI) engines, which are commonly used in transportation and heavy machinery, such engines can be operated in dual-fuel mode (diesel-biogas). However, the literature shows that the thermal efficiency is then lower due to an incomplete combustion process. In order to increase the efficiency, the combustion process in the combustion chamber needs to be explored. Here, a commercial CFD code is used to explore the combustion process of a small CI engine run in dual-fuel mode (diesel-biogas). The turbulent governing equations are solved based on the finite volume method. A simulation of the compression and expansion strokes at an engine speed of 1000 rpm and a load of 2500 W has been carried out. The pressure and temperature distributions and streamlines are plotted. The simulation results show that at an engine power of 732.27 W the thermal efficiency is 9.05%. The experimental and simulation results show good agreement. The method developed in this study can be used to investigate the combustion process of CI engines run in dual-fuel mode.
Numerical Simulation of Creep Characteristic for Composite Rock Mass with Weak Interlayer
NASA Astrophysics Data System (ADS)
Li, Jian-guang; Zhang, Zuo-liang; Zhang, Yu-biao; Shi, Xiu-wen; Wei, Jian
2017-06-01
Composite rock mass with weak interlayers exists widely in engineering practice, and it is essential to research its creep behavior, which can cause stability problems in rock engineering and production accidents. However, because sampling is difficult and samples are lost or damaged during delivery and machining, enough natural layered composite rock mass samples often cannot be obtained, so indirect test methods have been widely used. In this paper, we used ANSYS software (a general finite element package produced by ANSYS, Inc.) to carry out numerical simulations based on uniaxial compression creep experiments on artificial composite rock mass with weak interlayers, after fitting the experimental data. The results show that the laws obtained by the numerical simulations and the experiments are consistent. This confirms that numerical simulation of the creep characteristics of rock mass with ANSYS software is feasible, and the method can also be extended to other underground engineering problems involving the simulation of weak intercalations.
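As a hedged illustration of the curve-fitting step that precedes the simulation runs, the sketch below fits a Burgers viscoelastic model to synthetic uniaxial creep data; the Burgers form, the applied stress, and all parameter values are assumptions for illustration, since the abstract does not spell out the creep law used.

```python
# Sketch: fitting a Burgers creep model eps(t) to (synthetic) uniaxial data.
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 5.0                                  # applied stress, MPa (assumed)

def burgers_strain(t, E1, E2, eta1, eta2):
    """Instantaneous elastic + steady viscous + delayed Kelvin components."""
    return SIGMA / E1 + SIGMA * t / eta1 + (SIGMA / E2) * (1.0 - np.exp(-E2 * t / eta2))

t = np.linspace(0.0, 100.0, 60)              # time, hours
rng = np.random.default_rng(9)
eps_obs = burgers_strain(t, 800.0, 1500.0, 4e4, 9e3) + rng.normal(0.0, 2e-4, t.size)

popt, _ = curve_fit(burgers_strain, t, eps_obs, p0=[1000.0, 1000.0, 1e4, 1e4])
print("fitted E1, E2, eta1, eta2:", np.round(popt, 1))
```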
An Object-Oriented Finite Element Framework for Multiphysics Phase Field Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael R Tonks; Derek R Gaston; Paul C Millett
2012-01-01
The phase field approach is a powerful and popular method for modeling microstructure evolution. In this work, advanced numerical tools are used to create a phase field framework that facilitates rapid model development. This framework, called MARMOT, is based on Idaho National Laboratory's finite element Multiphysics Object-Oriented Simulation Environment. In MARMOT, the system of phase field partial differential equations (PDEs) is solved simultaneously with PDEs describing additional physics, such as solid mechanics and heat conduction, using the Jacobian-Free Newton Krylov Method. An object-oriented architecture is created by taking advantage of commonalities in phase field models to facilitate development of new models with very little written code. In addition, MARMOT provides access to mesh and time step adaptivity, reducing the cost for performing simulations with large disparities in both spatial and temporal scales. In this work, phase separation simulations are used to show the numerical performance of MARMOT. Deformation-induced grain growth and void growth simulations are included to demonstrate the multiphysics capability.
Evaluation of null-point detection methods on simulation data
NASA Astrophysics Data System (ADS)
Olshevsky, Vyacheslav; Fu, Huishan; Vaivads, Andris; Khotyaintsev, Yuri; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
We model the measurements of artificial spacecraft that resemble the configuration of CLUSTER propagating in a particle-in-cell simulation of turbulent magnetic reconnection. The simulation domain contains multiple isolated X-type null-points, but the majority are O-type null-points. Simulations show that current pinches surrounded by twisted fields, analogous to laboratory pinches, are formed along the sequences of O-type nulls. In the simulation, the magnetic reconnection is mainly driven by the kinking of the pinches, at spatial scales of several ion inertial lengths. We compute the locations of magnetic null-points and detect their type. When the satellites are separated by fractions of an ion inertial length, as is the case for CLUSTER, they are able to locate both the isolated null-points and the pinches. We apply the method to real CLUSTER data and speculate on how common pinches are in the magnetosphere, and on whether they play a dominant role in the dissipation of magnetic energy.
Test Methods for Robot Agility in Manufacturing.
Downs, Anthony; Harrison, William; Schlenoff, Craig
2016-01-01
The paper aims to define and describe test methods and metrics to assess industrial robot system agility in both simulation and in reality. The paper describes test methods and associated quantitative and qualitative metrics for assessing robot system efficiency and effectiveness which can then be used for the assessment of system agility. The paper describes how the test methods were implemented in a simulation environment and real world environment. It also shows how the metrics are measured and assessed as they would be in a future competition. The test methods described in this paper will push forward the state of the art in software agility for manufacturing robots, allowing small and medium manufacturers to better utilize robotic systems. The paper fulfills the identified need for standard test methods to measure and allow for improvement in software agility for manufacturing robots.
NASA Astrophysics Data System (ADS)
Guang, Chen; Qibo, Feng; Keqin, Ding; Zhan, Gao
2017-10-01
A subpixel displacement measurement method based on the combination of particle swarm optimization (PSO) and a gradient algorithm (GA) is proposed to optimize the accuracy and speed of the GA, giving a subpixel displacement measurement method better suited to engineering practice. An initial integer-pixel value is obtained using the global searching ability of PSO, and gradient operators are then adopted for the subpixel displacement search. A comparison was made between this method and the GA using simulated speckle images and rigid-body displacements of metal specimens. The results showed that the computational accuracy of the combined PSO and GA method reached 0.1 pixel in the simulated speckle images, and even 0.01 pixel in the metal specimen. Computational efficiency and the antinoise performance of the improved method were also markedly enhanced.
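The two-stage idea can be sketched compactly on synthetic speckle: PSO searches integer displacements by maximizing cross-correlation, then a single Lucas-Kanade-style gradient least-squares solve refines to subpixel. This is an illustrative toy, not the authors' implementation; the swarm settings, image sizes, and the particular gradient step are assumptions.

```python
# Sketch: integer-pixel PSO search + one gradient refinement step on speckle.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as ndshift

rng = np.random.default_rng(11)
N, true_d = 128, np.array([3.6, -2.3])            # true (dy, dx) in pixels

# synthetic speckle pair: g is f shifted by true_d via a Fourier phase ramp
f = rng.random((N, N))
ky, kx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
ramp = np.exp(-2j * np.pi * (ky * true_d[0] + kx * true_d[1]))
g = np.real(np.fft.ifft2(np.fft.fft2(f) * ramp))
f, g = gaussian_filter(f, 2.0), gaussian_filter(g, 2.0)   # smooth for gradients

def ncc(d):
    """Correlation between f and g after undoing a trial displacement d."""
    gi = ndshift(g, -d, order=1)
    return np.corrcoef(f[16:-16, 16:-16].ravel(), gi[16:-16, 16:-16].ravel())[0, 1]

# stage 1: PSO over integer displacements in [-8, 8]^2
P = rng.integers(-8, 9, size=(20, 2)).astype(float)
V = np.zeros_like(P)
pbest, pval = P.copy(), np.array([ncc(p) for p in P])
gbest = pbest[pval.argmax()].copy()
for _ in range(15):
    r1, r2 = rng.random((2, 20, 1))
    V = 0.6 * V + 1.5 * r1 * (pbest - P) + 1.5 * r2 * (gbest - P)
    P = np.rint(np.clip(P + V, -8, 8))            # stay on the integer grid
    val = np.array([ncc(p) for p in P])
    better = val > pval
    pbest[better], pval[better] = P[better], val[better]
    gbest = pbest[pval.argmax()].copy()

# stage 2: one gradient (Lucas-Kanade style) least-squares solve for the residual
gi = ndshift(g, -gbest, order=3)
gy, gx = np.gradient(gi)
A = np.c_[gy[16:-16, 16:-16].ravel(), gx[16:-16, 16:-16].ravel()]
b = (f - gi)[16:-16, 16:-16].ravel()
sub, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true:", true_d, " estimated:", np.round(gbest + sub, 2))
```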
A Robust Cooperated Control Method with Reinforcement Learning and Adaptive H∞ Control
NASA Astrophysics Data System (ADS)
Obayashi, Masanao; Uchiyama, Shogo; Kuremoto, Takashi; Kobayashi, Kunikazu
This study proposes a robust cooperated control method combining reinforcement learning with robust control. A remarkable characteristic of reinforcement learning is that it does not require a model formula; however, it does not guarantee the stability of the system. On the other hand, a robust control system guarantees stability and robustness, but it requires a model formula. We employ both the actor-critic method, a kind of reinforcement learning that controls continuous-valued actions with a minimal amount of computation, and traditional robust control, that is, H∞ control. The proposed method was compared with the conventional control method, that is, the actor-critic alone, through computer simulation of controlling the angle and position of a crane system, and the simulation results showed the effectiveness of the proposed method.
Novel Digital Driving Method Using Dual Scan for Active Matrix Organic Light-Emitting Diode Displays
NASA Astrophysics Data System (ADS)
Jung, Myoung Hoon; Choi, Inho; Chung, Hoon-Ju; Kim, Ohyun
2008-11-01
A new digital driving method has been developed for low-temperature polycrystalline silicon transistor-driven, active-matrix organic light-emitting diode (AM-OLED) displays using time-ratio gray-scale expression. This driving method effectively increases the emission ratio and the number of subfields by inserting another subfield set into the nondisplay periods of the conventional digital driving method. By employing the proposed modified gravity-center coding, the method can effectively compensate for dynamic false contour noise. The operation and performance were verified by current measurements and image simulation. The simulation results using eight test images show that the proposed approach improves the average peak signal-to-noise ratio by 2.61 dB and the emission ratio by 20.5% compared with the conventional digital driving method.