Hybrid classical/quantum simulation for infrared spectroscopy of water
NASA Astrophysics Data System (ADS)
Maekawa, Yuki; Sasaoka, Kenji; Ube, Takuji; Ishiguro, Takashi; Yamamoto, Takahiro
2018-05-01
We have developed a hybrid classical/quantum simulation method to calculate the infrared (IR) spectrum of water. The proposed method achieves much higher accuracy than conventional classical molecular dynamics (MD) simulations at a much lower computational cost than ab initio MD simulations. The IR spectrum of water is obtained as an ensemble average of the eigenvalues of the dynamical matrix constructed by ab initio calculations, using the positions of oxygen atoms that constitute water molecules obtained from the classical MD simulation. The calculated IR spectrum is in excellent agreement with the experimental IR spectrum.
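The central numerical step described above, diagonalizing mass-weighted dynamical matrices over an ensemble and histogramming the eigenfrequencies, can be sketched as follows (toy Hessians stand in for the ab initio ones; all values are illustrative):

```python
import numpy as np

def ir_spectrum(hessians, masses, bins=50):
    """Ensemble-averaged vibrational spectrum from dynamical matrices.
    Frequencies come from eigenvalues of the mass-weighted Hessian
    D = M^{-1/2} H M^{-1/2}, with omega_k = sqrt(lambda_k)."""
    inv_sqrt_m = 1.0 / np.sqrt(masses)
    freqs = []
    for H in hessians:
        D = inv_sqrt_m[:, None] * H * inv_sqrt_m[None, :]  # dynamical matrix
        lam = np.linalg.eigvalsh(D)
        freqs.extend(np.sqrt(np.abs(lam)))  # |.| guards tiny negative modes
    return np.histogram(freqs, bins=bins)

# toy ensemble: randomly perturbed harmonic Hessians for 9 coordinates
rng = np.random.default_rng(0)
base = np.diag(np.full(9, 4.0))
ensemble = []
for _ in range(20):
    A = rng.normal(scale=0.1, size=(9, 9))
    ensemble.append(base + A + A.T)          # keep each Hessian symmetric
hist, edges = ir_spectrum(ensemble, masses=np.ones(9))
```

In the actual method the Hessians would come from ab initio calculations at oxygen positions sampled from classical MD; here the ensemble is synthetic.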
Chen, Mohan; Vella, Joseph R.; Panagiotopoulos, Athanassios Z.; ...
2015-04-08
The structure and dynamics of liquid lithium are studied using two simulation methods: orbital-free (OF) first-principles molecular dynamics (MD), which employs OF density functional theory (DFT), and classical MD utilizing a second nearest-neighbor embedded-atom method potential. The properties we studied include the dynamic structure factor, the self-diffusion coefficient, the dispersion relation, the viscosity, and the bond angle distribution function. Our simulation results were compared to available experimental data when possible. Each method has distinct advantages and disadvantages. For example, OFDFT gives better agreement with experimental dynamic structure factors, yet is more computationally demanding than classical simulations. Classical simulations can access a broader temperature range and longer time scales. The combination of first-principles and classical simulations is a powerful tool for studying properties of liquid lithium.
A strategy for quantum algorithm design assisted by machine learning
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung
2014-07-01
We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable to the design of quantum oracle-based algorithms. As a case study, we chose an oracle decision problem known as the Deutsch-Jozsa problem. We show, using Monte Carlo simulations, that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine-learning-based method.
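The target of the case study, the Deutsch-Jozsa circuit, is easy to verify by direct statevector simulation; the sketch below shows only the oracle-decision logic, not the authors' learning scheme (the phase-oracle convention is assumed):

```python
import numpy as np

def deutsch_jozsa(f, n):
    """One-query Deutsch-Jozsa, simulated as a statevector:
    n-fold Hadamard, phase oracle (-1)^f(x), n-fold Hadamard,
    then read the probability of the all-zeros outcome."""
    N = 2 ** n
    state = np.full(N, 1.0 / np.sqrt(N))                    # uniform superposition
    state *= np.array([(-1.0) ** f(x) for x in range(N)])   # phase oracle U_f
    amp0 = state.sum() / np.sqrt(N)    # <0...0| amplitude after final H-layer
    return "constant" if abs(amp0) ** 2 > 0.5 else "balanced"

assert deutsch_jozsa(lambda x: 1, 3) == "constant"
assert deutsch_jozsa(lambda x: x & 1, 3) == "balanced"
```

For a constant oracle the all-zeros probability is exactly 1; for a balanced oracle it is exactly 0, so a single query decides the problem.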
Off-diagonal expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Simulation of wave packet tunneling of interacting identical particles
NASA Astrophysics Data System (ADS)
Lozovik, Yu. E.; Filinov, A. V.; Arkhipov, A. S.
2003-02-01
We demonstrate a method for the simulation of nonstationary quantum processes, considering the tunneling of two interacting identical particles represented by wave packets. The Wigner molecular dynamics (WMD) method used here is based on the Wigner representation of quantum mechanics. In this method, ensembles of classical trajectories are used to solve the quantum Wigner-Liouville equation. These classical trajectories obey Hamiltonian-like equations in which the effective potential consists of the usual classical term plus a quantum term that depends on the Wigner function and its derivatives. Because the quantum term is calculated from the local distribution of trajectories in phase space, the classical trajectories are not independent, in contrast to classical molecular dynamics. The WMD method accounts for both exchange and interparticle interaction. The roles of direct and exchange interactions in tunneling are analyzed, and the tunneling times for interacting particles are calculated.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
Hanford, Amanda D; O'Connor, Patrick D; Anderson, James B; Long, Lyle N
2008-06-01
In the current study, real gas effects in the propagation of sound waves are simulated using the direct simulation Monte Carlo method for a wide range of frequencies. This particle method allows for treatment of acoustic phenomena at high Knudsen numbers, corresponding to low densities and a high ratio of the molecular mean free path to wavelength. Different methods to model the internal degrees of freedom of diatomic molecules and the exchange of translational, rotational and vibrational energies in collisions are employed in the current simulations of a diatomic gas. One of these methods is the fully classical rigid-rotor/harmonic-oscillator model for rotation and vibration. A second method takes into account the discrete quantum energy levels for vibration with the closely spaced rotational levels classically treated. This method gives a more realistic representation of the internal structure of diatomic and polyatomic molecules. Applications of these methods are investigated in diatomic nitrogen gas in order to study the propagation of sound and its attenuation and dispersion along with their dependence on temperature. With the direct simulation method, significant deviations from continuum predictions are also observed for high Knudsen number flows.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
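Under the local-linearization assumption that the angular rate is constant across one step, the quaternion update has a closed form that preserves the unit norm exactly, which is the stability property at issue; a sketch with illustrative rates (body-frame rate convention assumed):

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def step_exact(q, omega, dt):
    """Closed-form update, exact when omega is constant over the step:
    q <- q * [cos(|w|dt/2), (w/|w|) sin(|w|dt/2)]."""
    w_norm = np.linalg.norm(omega)
    if w_norm == 0.0:
        return q
    th = w_norm * dt / 2.0
    dq = np.concatenate(([np.cos(th)], np.sin(th) * omega / w_norm))
    return quat_mul(q, dq)

q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, 5.0])     # fast spin about z (rad/s)
for _ in range(1000):                 # 1 s of flight at dt = 1 ms
    q = step_exact(q, omega, 1e-3)
# unit norm is preserved to machine precision, unlike explicit Euler,
# which grows the norm at high rates
```

Each incremental rotation is a unit quaternion, so no renormalization is needed; a classical second-order explicit method would require it at high rates.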
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
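The multiple time-stepping ingredient can be illustrated on its own; below is a minimal r-RESPA-style velocity-Verlet loop for a hypothetical fast/slow force split (a sketch of the generic technique, not the authors' adaptive path-integral integrator):

```python
def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
    """One outer step of a RESPA-style integrator: half-kick with the
    slow force, n_inner velocity-Verlet substeps with the fast force,
    then the closing slow half-kick."""
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

# hypothetical split: stiff harmonic term (fast) + weak quartic term (slow)
f_fast = lambda x: -100.0 * x            # omega = 10
f_slow = lambda x: -0.1 * x ** 3
energy = lambda x, v: 0.5 * v ** 2 + 50.0 * x ** 2 + 0.025 * x ** 4
x, v = 1.0, 0.0
E0 = energy(x, v)
for _ in range(2000):
    x, v = respa_step(x, v, 1.0, f_fast, f_slow, dt=0.05, n_inner=5)
E = energy(x, v)
# the slow force is evaluated 5x less often than the fast force,
# while the energy drift stays at the sub-percent level
```

In path-integral MD the analogous split is between cheap intra-ring-polymer springs (fast) and the physical potential (slow).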
Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán
2018-04-05
Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
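The lowest-order Wigner-Kirkwood correction that such a functor builds on replaces V by V_eff = V + (hbar^2 / 24 m k_B T) V'' to first order in hbar^2; a sketch with approximate neon Lennard-Jones parameters (illustrative textbook form and values, not necessarily the authors' exact functor):

```python
import numpy as np

HBAR = 1.0545718e-34   # J s
KB = 1.380649e-23      # J/K

def v_eff(V, x, m, T, dx=1e-13):
    """First-order Wigner-Kirkwood effective potential:
    V_eff = V + (hbar^2 / (24 m k_B T)) V''  (V'' by central differences)."""
    d2V = (V(x + dx) - 2.0 * V(x) + V(x - dx)) / dx ** 2
    return V(x) + HBAR ** 2 / (24.0 * m * KB * T) * d2V

# Lennard-Jones parameters roughly appropriate for neon (illustrative)
eps = 35.6 * KB              # well depth, ~35.6 K
sigma = 2.75e-10             # m
m = 20.18 * 1.6605e-27       # kg
V = lambda r: 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
r_min = 2.0 ** (1.0 / 6.0) * sigma   # potential minimum, where V'' > 0
shift = v_eff(V, r_min, m, T=30.0) - V(r_min)
# a small positive quantum correction near the well bottom
```

Running classical MD on V_eff then captures leading quantum corrections at essentially classical cost, which is the spirit of the FFF approach.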
Bond breaking in epoxy systems: A combined QM/MM approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barr, Stephen A.; Ecker, Allison M.; Berry, Rajiv J., E-mail: Rajiv.Berry@us.af.mil
2016-06-28
A novel method to combine quantum mechanics (QM) and molecular mechanics has been developed to accurately and efficiently account for covalent bond breaking in polymer systems under high strain without the use of predetermined break locations. Use of this method will provide a better fundamental understanding of the mechano-chemical origins of fracture in thermosets. Since classical force fields cannot accurately account for bond breaking, and QM is too demanding to simulate large systems, a hybrid approach is required. In the method presented here, strain is applied to the system using a classical force field, and all bond lengths are monitored. When a bond is stretched past a threshold value, a zone surrounding the bond is used in a QM energy minimization to determine which, if any, bonds break. The QM results are then used to reconstitute the system to continue the classical simulation at progressively larger strain until another QM calculation is triggered. In this way, a QM calculation is only computed when and where needed, allowing for efficient simulations. A robust QM method for energy minimization has been determined, as well as appropriate values for the QM zone size and the threshold bond length. Compute times do not differ dramatically from classical molecular mechanical simulations.
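The control flow described above (strain classically, monitor bonds, hand a zone to QM when a threshold is crossed) can be sketched with a hypothetical QM callback standing in for the energy minimization:

```python
import numpy as np

def strain_with_qm_triggers(positions, bonds, strain_step, n_steps,
                            threshold, run_qm_zone):
    """Skeleton of the hybrid loop: apply strain classically, monitor
    bond lengths, and hand any over-stretched bond's neighborhood to a
    QM minimizer. `run_qm_zone` is a hypothetical callback standing in
    for the QM step; it returns the set of bonds that actually break."""
    for _ in range(n_steps):
        positions[:, 0] *= (1.0 + strain_step)   # uniaxial strain (toy)
        for i, j in list(bonds):
            r = np.linalg.norm(positions[i] - positions[j])
            if r > threshold:
                broken = run_qm_zone(positions, (i, j))  # QM decides
                bonds -= broken
    return bonds

# toy system: a chain of 4 atoms; pretend QM always confirms the break
pos = np.array([[0.0, 0, 0], [1.5, 0, 0], [3.0, 0, 0], [4.5, 0, 0]])
bonds = {(0, 1), (1, 2), (2, 3)}
qm_stub = lambda p, bond: {bond}
remaining = strain_with_qm_triggers(pos, bonds, 0.02, 20, 2.0, qm_stub)
```

At 2% strain per step, each 1.5-unit bond crosses the 2.0 threshold after about 15 steps, so every bond is referred to the (stubbed) QM zone and removed.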
Quantum approach to classical statistical mechanics.
Somma, R D; Batista, C D; Ortiz, G
2007-07-20
We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground-state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates that assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(-c/N) for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.
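The logarithmic temperature schedule can be dropped into an ordinary Metropolis annealer; a toy Ising-chain sketch with k_B = 1 and an illustrative prefactor p (the classical simulated-annealing side only, not the paper's quantum formulation):

```python
import numpy as np

def anneal_ising_chain(N=20, steps=20000, p=0.1, seed=1):
    """Metropolis annealing of an open 1D ferromagnetic Ising chain
    (J = 1, k_B = 1) with the logarithmic schedule T(t) = p*N / log t."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=N)
    energy = lambda s: -np.sum(s[:-1] * s[1:])
    E = energy(s)
    for t in range(2, steps + 2):
        T = p * N / np.log(t)
        i = rng.integers(N)
        s[i] *= -1                      # propose a single spin flip
        dE = energy(s) - E
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            E += dE                     # accept
        else:
            s[i] *= -1                  # reject: undo the flip
    return E

E_final = anneal_ising_chain()
# ground-state energy of the open 20-spin chain is -(N-1) = -19
```

The slow logarithmic decay is what the convergence proof via the adiabatic theorem requires; faster schedules can freeze the system in excited states.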
Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.
Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo
2015-12-01
In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate the human operator's neuromuscular and visual responses in cases where the classic method fails.
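The ARX ingredient of the first proposed method reduces to linear least squares; a minimal sketch on synthetic single-loop data (the mechanics only, not the full multiloop pilot-model estimator):

```python
import numpy as np

def fit_arx(y, u, na, nb):
    """Fit y[t] = sum_i a_i y[t-i] + sum_j b_j u[t-j] by least squares."""
    n0 = max(na, nb)
    rows = [np.concatenate((y[t - na:t][::-1], u[t - nb:t][::-1]))
            for t in range(n0, len(y))]
    Phi, target = np.array(rows), y[n0:]
    theta, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    return theta[:na], theta[na:]

# synthesize data from a known ARX(2,1) system, then recover it
rng = np.random.default_rng(0)
a_true, b_true = np.array([1.2, -0.5]), np.array([0.8])
u = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(2, 2000):
    y[t] = (a_true @ y[t - 2:t][::-1] + b_true @ u[t - 1:t][::-1]
            + 0.01 * rng.normal())
a_est, b_est = fit_arx(y, u, na=2, nb=1)
# recovered coefficients match [1.2, -0.5] and [0.8] closely
```

Unlike cross-spectral estimators, this parametric fit does not require the noninterference hypothesis on the excitation signals.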
Data Analysis Techniques for Physical Scientists
NASA Astrophysics Data System (ADS)
Pruneau, Claude A.
2017-10-01
Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
First-order design of geodetic networks using the simulated annealing method
NASA Astrophysics Data System (ADS)
Berné, J. L.; Baselga, S.
2004-09-01
The general problem of the optimal design for a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classic theory of this problem and the optimization methods are reviewed. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to the iterative heuristic techniques of operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. Basic formulation and some examples are studied.
Molecular dynamics simulations of classical sound absorption in a monatomic gas
NASA Astrophysics Data System (ADS)
Ayub, M.; Zander, A. C.; Huang, D. M.; Cazzolato, B. S.; Howard, C. Q.
2018-05-01
Sound wave propagation in argon gas is simulated using molecular dynamics (MD) in order to determine the attenuation of acoustic energy due to classical (viscous and thermal) losses at high frequencies. In addition, a method is described to estimate attenuation of acoustic energy using the thermodynamic concept of exergy. The results are compared against standing wave theory and the predictions of the theory of continuum mechanics. Acoustic energy losses are studied by evaluating various attenuation parameters and by comparing the changes in behavior at three different frequencies. This study demonstrates acoustic absorption effects in a gas simulated in a thermostatted molecular simulation and quantifies the classical losses in terms of the sound attenuation constant. The approach can be extended to further understanding of acoustic loss mechanisms in the presence of nanoscale porous materials in the simulation domain.
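The classical losses being quantified correspond, in the continuum limit, to the Stokes-Kirchhoff absorption coefficient, which can be evaluated directly (argon property values below are approximate, for illustration only):

```python
import numpy as np

def stokes_kirchhoff_alpha(f, rho, c, mu, kappa, gamma, cp):
    """Classical (viscous + thermal) sound absorption coefficient, Np/m:
    alpha = omega^2 / (2 rho c^3) * (4 mu / 3 + (gamma - 1) kappa / cp)."""
    omega = 2.0 * np.pi * f
    return omega ** 2 / (2.0 * rho * c ** 3) * (
        4.0 * mu / 3.0 + (gamma - 1.0) * kappa / cp)

# approximate argon properties near 300 K and 1 atm (illustrative)
alpha = stokes_kirchhoff_alpha(
    f=1e6,           # 1 MHz
    rho=1.6228,      # kg/m^3
    c=323.0,         # m/s
    mu=2.27e-5,      # Pa s
    kappa=0.0177,    # W/(m K)
    gamma=5.0 / 3.0,
    cp=520.0)        # J/(kg K)
# alpha is on the order of tens of Np/m at 1 MHz, reflecting the
# omega^2 scaling of classical absorption
```

At high Knudsen numbers the MD results deviate from this continuum prediction, which is precisely what such simulations probe.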
Entangled trajectories Hamiltonian dynamics for treating quantum nuclear effects
NASA Astrophysics Data System (ADS)
Smith, Brendan; Akimov, Alexey V.
2018-04-01
A simple and robust methodology, dubbed Entangled Trajectories Hamiltonian Dynamics (ETHD), is developed to capture quantum nuclear effects such as tunneling and zero-point energy through the coupling of multiple classical trajectories. The approach reformulates the classically mapped second-order Quantized Hamiltonian Dynamics (QHD-2) in terms of coupled classical trajectories. The method partially enforces the uncertainty principle and facilitates tunneling. The applicability of the method is demonstrated by studying the dynamics in symmetric double well and cubic metastable state potentials. The methodology is validated using exact quantum simulations and is compared to QHD-2. We illustrate its relationship to the rigorous Bohmian quantum potential approach, from which ETHD can be derived. Our simulations show a remarkable agreement of the ETHD calculation with the quantum results, suggesting that ETHD may be a simple and inexpensive way of including quantum nuclear effects in molecular dynamics simulations.
Controlling lightwave in Riemann space by merging geometrical optics with transformation optics.
Liu, Yichao; Sun, Fei; He, Sailing
2018-01-11
In geometrical optical design, we only need to choose a suitable combination of lenses, prisms, and mirrors to design an optical path. It is a simple and classic method for engineers. However, geometrical optics cannot produce exotic optical devices such as invisibility cloaks or optical wormholes. Transformation optics has paved the way for these complicated designs. However, controlling the propagation of light by transformation optics is not a direct design process like geometrical optics. In this study, a novel mixed method for optical design is proposed that has both the simplicity of classic geometrical optics and the flexibility of transformation optics. This mixed method overcomes the limitations of classic optical design; at the same time, it gives intuitive guidance for optical design by transformation optics. Three novel optical devices with fantastic functions have been designed using this mixed method, including asymmetrical transmission, bidirectional focusing, and bidirectional cloaking. These optical devices cannot be implemented by classic optics alone and are also too complicated to be designed by pure transformation optics. Numerical simulations based on both the ray tracing method and the full-wave simulation method are carried out to verify the performance of these three optical devices.
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques (classical ray tracing and the finite-difference time-domain method), an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Higher-order methods for simulations on quantum computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sornborger, A.T.; Stewart, E.D.
1999-09-01
To efficiently implement many-qubit gates for use in quantum simulations on quantum computers, we develop and present methods for reexpressing exp[-i(H_1 + H_2 + ...)Δt] as a product of factors exp[-iH_1 Δt], exp[-iH_2 Δt], ..., accurate to third or fourth order in Δt. The methods we derive are an extended form of the symplectic method and can also be used for the integration of classical Hamiltonians on classical computers. We derive both integral and irrational methods, and find the most efficient methods in both cases. © 1999 The American Physical Society
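The baseline such higher-order factorizations improve on is the symmetric (second-order) splitting, whose O(Δt³) per-step error is easy to confirm numerically for random Hermitian H_1, H_2:

```python
import numpy as np

def u_exp(H, t):
    """e^{-i H t} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(0)

def rand_herm(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (A + A.conj().T) / 2

H1, H2 = rand_herm(4), rand_herm(4)

def strang_error(dt):
    """Norm of e^{-i(H1+H2)dt} minus the symmetric splitting
    e^{-iH1 dt/2} e^{-iH2 dt} e^{-iH1 dt/2}; scales as dt^3."""
    exact = u_exp(H1 + H2, dt)
    split = u_exp(H1, dt / 2) @ u_exp(H2, dt) @ u_exp(H1, dt / 2)
    return np.linalg.norm(exact - split)

ratio = strang_error(0.05) / strang_error(0.025)
# halving dt shrinks the per-step error by roughly 2^3 = 8
```

The third- and fourth-order methods of the paper reduce this error further at the cost of more exponential factors per step.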
Recent Advances and Perspectives on Nonadiabatic Mixed Quantum-Classical Dynamics.
Crespo-Otero, Rachel; Barbatti, Mario
2018-05-16
Nonadiabatic mixed quantum-classical (NA-MQC) dynamics methods form a class of computational theoretical approaches in quantum chemistry tailored to investigate the time evolution of nonadiabatic phenomena in molecules and supramolecular assemblies. NA-MQC is characterized by a partition of the molecular system into two subsystems: one to be treated quantum mechanically (usually but not restricted to electrons) and another to be dealt with classically (nuclei). The two subsystems are connected through nonadiabatic coupling terms to enforce self-consistency. A local approximation underlies the classical subsystem, implying that direct dynamics can be simulated, without needing precomputed potential energy surfaces. The NA-MQC split allows reducing computational costs, enabling the treatment of realistic molecular systems in diverse fields. Starting from the three best-established methods (mean-field Ehrenfest, trajectory surface hopping, and multiple spawning), this review focuses on the NA-MQC dynamics methods and programs developed in the last 10 years. It stresses the relations between approaches and their domains of application. The electronic structure methods most commonly used together with NA-MQC dynamics are reviewed as well. The accuracy and precision of NA-MQC simulations are critically discussed, and general guidelines to choose an adequate method for each application are delivered.
NASA Astrophysics Data System (ADS)
Cai, Xiaofeng; Guo, Wei; Qiu, Jing-Mei
2018-02-01
In this paper, we develop a high order semi-Lagrangian (SL) discontinuous Galerkin (DG) method for nonlinear Vlasov-Poisson (VP) simulations without operator splitting. In particular, we combine two recently developed novel techniques: one is the high order non-splitting SLDG transport method (Cai et al. (2017) [4]), and the other is the high order characteristics tracing technique proposed in Qiu and Russo (2017) [29]. The proposed method, with up to third order accuracy in both space and time, is locally mass conservative, free of splitting error, positivity-preserving, and stable and robust for large time-stepping sizes. The SLDG VP solver is applied to classic benchmark test problems such as Landau damping and two-stream instabilities for VP simulations. The efficiency and effectiveness of the proposed scheme are extensively tested. Tremendous CPU savings are shown by comparisons between the proposed SL DG scheme and the classical Runge-Kutta DG method.
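The semi-Lagrangian principle underlying the solver (trace characteristics back and interpolate, so large time steps remain stable) reduces in the 1D constant-coefficient case to a few lines; this linear-interpolation sketch is far simpler than the DG machinery of the paper:

```python
import numpy as np

def sl_advect(f, x, a, dt, n_steps):
    """Semi-Lagrangian step for f_t + a f_x = 0 on a uniform periodic
    grid: trace the characteristic back to x - a*dt and interpolate."""
    L = len(x) * (x[1] - x[0])        # domain length
    for _ in range(n_steps):
        f = np.interp(x - a * dt, x, f, period=L)
    return f

x = np.linspace(0.0, 1.0, 200, endpoint=False)
f0 = np.exp(-100.0 * (x - 0.3) ** 2)
dx = x[1] - x[0]
# time step = 4 grid crossings, far beyond an explicit Eulerian CFL limit
f1 = sl_advect(f0, x, a=1.0, dt=4 * dx, n_steps=25)
# the pulse is transported by a*dt*n_steps = 0.5 without instability
```

Because the foot of each characteristic lands exactly on a grid point here, the transport is exact; for general displacements the interpolation order controls the accuracy, which is where high-order DG representations pay off.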
Design and application of 3D-printed stepless beam modulators in proton therapy
NASA Astrophysics Data System (ADS)
Lindsay, C.; Kumlin, J.; Martinez, D. M.; Jirasek, A.; Hoehr, C.
2016-06-01
A new method for the design of stepless beam modulators for proton therapy is described and verified. Simulations of the classic designs are compared against the stepless method for various modulation widths which are clinically applicable in proton eye therapy. Three modulator wheels were printed using a Stratasys Objet30 3D printer. The resulting depth dose distributions showed improved uniformity over the classic stepped designs. Simulated results imply a possible improvement in distal penumbra width; however, more accurate measurements are needed to fully verify this effect. Lastly, simulations were done to model bio-equivalence to Co-60 cell kill. A wheel was successfully designed to flatten this metric.
A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.
Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang
2017-07-04
The classic detector for active sonar and radar is the matched filter (MF), which is the optimal processor under ideal conditions. Aiming at the problem of active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) that uses a frequency-domain adaptive line enhancer (ALE). The FDAMF is an improved MF. In the simulations in this paper, the signal-to-noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. In order to improve the performance of the FDAMF at low input SNR, we propose a pre-processing method called frequency-domain time reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with the TRC-IS method obtains a higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in the simulations in this paper. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions. The experimental results indicate that the FDAMF does improve the performance of the MF and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing method works well in an actual noisy ocean environment.
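The frequency-domain MF baseline is conjugate-spectrum multiplication followed by an inverse FFT; a minimal detection sketch with a toy chirp pulse (the plain MF only, with no ALE or TRC-IS stages):

```python
import numpy as np

def matched_filter_fd(rx, template):
    """Frequency-domain matched filter: IFFT(FFT(rx) * conj(FFT(template))),
    i.e. circular cross-correlation of rx with the template."""
    n = len(rx)
    return np.real(np.fft.ifft(np.fft.fft(rx) *
                               np.conj(np.fft.fft(template, n))))

rng = np.random.default_rng(0)
fs = 8000.0
t = np.arange(400) / fs                                  # 50 ms pulse
ping = np.sin(2 * np.pi * (500.0 * t + 10000.0 * t**2))  # 500->1500 Hz LFM
rx = rng.normal(scale=1.0, size=4096)                    # input SNR ~ -3 dB
delay = 1234
rx[delay:delay + len(ping)] += ping
out = matched_filter_fd(rx, ping)
# the correlation peak sits at the echo delay despite the negative SNR
```

The processing gain equals the time-bandwidth product of the pulse, which is why even a buried echo produces a clear correlation peak.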
Realistic finite temperature simulations of magnetic systems using quantum statistics
NASA Astrophysics Data System (ADS)
Bergqvist, Lars; Bergman, Anders
2018-01-01
We have performed realistic atomistic simulations at finite temperatures using Monte Carlo and atomistic spin dynamics simulations incorporating quantum (Bose-Einstein) statistics. The description is much improved at low temperatures compared to classical (Boltzmann) statistics normally used in this kind of simulation, while at higher temperatures the classical statistics are recovered. This corrected low-temperature description is reflected in both the magnetization and the magnetic specific heat, the latter allowing for improved modeling of the magnetic contribution to free energies. A central property in the method is the magnon density of states at finite temperatures, and we have compared several different implementations for obtaining it. The method has no restrictions regarding chemical and magnetic order of the considered materials. This is demonstrated by applying the method to elemental ferromagnetic systems, including Fe and Ni, as well as Fe-Co random alloys and the ferrimagnetic system GdFe3.
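The statistical correction at the heart of the method is replacing the classical equipartition occupation k_BT/ħω with the Bose-Einstein occupation; given a magnon density of states, the thermal energy follows by quadrature (toy DOS, illustrative units with ħ = k_B = 1):

```python
import numpy as np

def magnetic_energy(w, g, T, quantum=True):
    """E(T) = ∫ g(w) w n(w,T) dw over a magnon density of states g(w);
    n is Bose-Einstein 1/(e^{w/T}-1) or classical equipartition T/w."""
    n = 1.0 / np.expm1(w / T) if quantum else T / w
    y = g * w * n
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w))  # trapezoid rule

w = np.linspace(0.01, 5.0, 2000)   # magnon frequencies across the band
g = np.sqrt(w)                     # toy Debye-like magnon DOS
E_q = magnetic_energy(w, g, T=0.1)
E_c = magnetic_energy(w, g, T=0.1, quantum=False)
# low T: classical equipartition grossly overestimates the energy;
# high T (T >> band top): the two statistics converge
```

Differentiating E(T) with respect to T gives the magnetic specific heat, the quantity the abstract highlights as improved at low temperature.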
2D Quantum Simulation of MOSFET Using the Non Equilibrium Green's Function Method
NASA Technical Reports Server (NTRS)
Svizhenko, Alexel; Anantram, M. P.; Govindan, T. R.; Yan, Jerry (Technical Monitor)
2000-01-01
The objectives summarized in this viewgraph presentation include: (1) the development of a quantum mechanical simulator for ultra-short-channel MOSFET simulation, including theory, physical approximations, and computer code; (2) exploration of physics that is not accessible by semiclassical methods; (3) benchmarking of semiclassical and classical methods; and (4) study of other two-dimensional devices and molecular structures, from discretized Hamiltonians to tight-binding Hamiltonians.
Classical and all-floating FETI methods for the simulation of arterial tissues
Augustin, Christoph M.; Holzapfel, Gerhard A.; Steinbach, Olaf
2015-01-01
High-resolution and anatomically realistic computer models of biological soft tissues play a significant role in the understanding of the function of cardiovascular components in health and disease. However, the computational effort to handle fine grids to resolve the geometries as well as sophisticated tissue models is very challenging. One possibility to derive a strongly scalable parallel solution algorithm is to consider finite element tearing and interconnecting (FETI) methods. In this study we propose and investigate the application of FETI methods to simulate the elastic behavior of biological soft tissues. As one particular example we choose the artery, which, like most other biological tissues, is characterized by anisotropic and nonlinear material properties. We compare two specific approaches of FETI methods, classical and all-floating, and investigate the numerical behavior of different preconditioning techniques. In comparison to classical FETI, the all-floating approach has not only advantages concerning the implementation but in many cases also concerning the convergence of the global iterative solution method. This behavior is illustrated with numerical examples. We present results of linear elastic simulations to show convergence rates, as expected from the theory, and results from the more sophisticated nonlinear case where we apply a well-known anisotropic model to the realistic geometry of an artery. Although the FETI methods have great applicability to artery simulations, we also discuss some limitations concerning the dependence on material parameters. PMID:26751957
Computational Insights into Materials and Interfaces for Capacitive Energy Storage
Zhan, Cheng; Lian, Cheng; Zhang, Yu; Thompson, Matthew W.; Xie, Yu; Wu, Jianzhong; Kent, Paul R. C.; Cummings, Peter T.; Wesolowski, David J.
2017-01-01
Supercapacitors such as electric double-layer capacitors (EDLCs) and pseudocapacitors are becoming increasingly important in the field of electrical energy storage. Theoretical study of energy storage in EDLCs focuses on solving for the electric double-layer structure in different electrode geometries and electrolyte components, which can be achieved by molecular simulations such as classical molecular dynamics (MD), classical density functional theory (classical DFT), and Monte Carlo (MC) methods. In recent years, combining first-principles and classical simulations to investigate carbon-based EDLCs has shed light on the importance of quantum capacitance in graphene-like 2D systems. More recently, the development of joint density functional theory (JDFT) has enabled self-consistent electronic-structure calculation for an electrode solvated by an electrolyte. In contrast with the large amount of theoretical and computational effort on EDLCs, theoretical understanding of pseudocapacitance is very limited. In this review, we first introduce popular modeling methods and then focus on several important aspects of EDLCs, including nanoconfinement, quantum capacitance, dielectric screening, and novel 2D electrode design; we also briefly touch upon the pseudocapacitive mechanism in RuO2. We summarize and conclude with an outlook for the future of materials simulation and design for capacitive energy storage. PMID:28725531
Thermal helium clusters at 3.2 Kelvin in classical and semiclassical simulations
NASA Astrophysics Data System (ADS)
Schulte, J.
1993-03-01
The thermodynamic stability of 4He4-13 at 3.2 K is investigated with the classical Monte Carlo method, with the semiclassical path-integral Monte Carlo (PIMC) method, and with the semiclassical all-order many-body method. In the all-order many-body simulation the dipole-dipole approximation including a short-range correction is used. The resulting stability plots are discussed and related to recent TOF experiments by Stephens and King. It is found that classical Monte Carlo, as expected, cannot resolve the characteristics of the measured mass spectrum. With PIMC, switching on more and more quantum mechanics by raising the number of virtual time steps results in more structure in the stability plot, but this does not lead to sufficient agreement with the TOF experiment. Only the all-order many-body method resolved the characteristic structures of the measured mass spectrum, including magic numbers. The result shows the influence of quantum statistics and quantum mechanics on the stability of small neutral helium clusters.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from a stage where they were used only to validate final designs to mainstream, simulation-driven product development. However, there are still niche areas of application, such as oiling simulations, where traditional CFD simulation times are prohibitive for product development, forcing reliance on experimental methods, which are expensive. In this paper a unique example of a sprocket-chain simulation is presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in applications with complex moving geometries, which pose severe challenges to classical finite volume CFD methods due to moving meshes and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations so that they can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
NASA Astrophysics Data System (ADS)
Drukker, Karen; Hammes-Schiffer, Sharon
1997-07-01
This paper presents an analytical derivation of a multiconfigurational self-consistent-field (MC-SCF) solution of the time-independent Schrödinger equation for nuclear motion (i.e. vibrational modes). This variational MC-SCF method is designed for the mixed quantum/classical molecular dynamics simulation of multiple proton transfer reactions, where the transferring protons are treated quantum mechanically while the remaining degrees of freedom are treated classically. This paper presents a proof that the Hellmann-Feynman forces on the classical degrees of freedom are identical to the exact forces (i.e. the Pulay corrections vanish) when this MC-SCF method is used with an appropriate choice of basis functions. This new MC-SCF method is applied to multiple proton transfer in a protonated chain of three hydrogen-bonded water molecules. The ground state and the first three excited state energies and the ground state forces agree well with full configuration interaction calculations. Sample trajectories are obtained using adiabatic molecular dynamics methods, and nonadiabatic effects are found to be insignificant for these sample trajectories. The accuracy of the excited states will enable this MC-SCF method to be used in conjunction with nonadiabatic molecular dynamics methods. This application differs from previous work in that it is a real-time quantum dynamical nonequilibrium simulation of multiple proton transfer in a chain of water molecules.
NASA Astrophysics Data System (ADS)
Xu, Yang; Song, Kai; Shi, Qiang
2018-03-01
The hydride transfer reaction catalyzed by dihydrofolate reductase is studied using a recently developed mixed quantum-classical method to investigate the nuclear quantum effects on the reaction. Molecular dynamics simulation is first performed based on a two-state empirical valence bond potential to map the atomistic model to an effective double-well potential coupled to a harmonic bath. In the mixed quantum-classical simulation, the hydride degree of freedom is quantized, and the effective harmonic oscillator modes are treated classically. It is shown that the hydride transfer reaction rate using the mapped effective double-well/harmonic-bath model is dominated by the contribution from the ground vibrational state. Further comparison with the adiabatic reaction rate constant based on the Kramers theory confirms that the reaction is primarily vibrationally adiabatic, which agrees well with the high transmission coefficients found in previous theoretical studies. The calculated kinetic isotope effect is also consistent with the experimental and recent theoretical results.
NASA Astrophysics Data System (ADS)
Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei
2018-07-01
Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulse component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients using the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's ability to enhance the impulses characteristic of faulty rolling element bearings, the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially in the case of low signal-to-noise ratio.
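The replacement of the iterative filter update with a particle swarm search can be sketched as follows. This is an illustrative reconstruction under simplified assumptions (kurtosis as the impulsiveness objective, no spherical coordinate transformation), not the authors' implementation:

```python
import random

def convolve(x, h):
    """FIR filtering of signal x with coefficients h."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if 0 <= n - k < len(x))
            for n in range(len(x))]

def kurtosis(y):
    """Normalized fourth moment: large for impulsive signals."""
    m = sum(y) / len(y)
    var = sum((v - m) ** 2 for v in y) / len(y)
    return 0.0 if var == 0 else sum((v - m) ** 4 for v in y) / (len(y) * var ** 2)

def pso_deconvolution(x, filt_len=8, n_particles=15, iters=40, seed=0):
    """Search for filter coefficients that maximize output kurtosis
    using the standard (inertia + cognitive + social) PSO update."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(filt_len)] for _ in range(n_particles)]
    vel = [[0.0] * filt_len for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [kurtosis(convolve(x, p)) for p in pos]
    g = max(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(filt_len):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = kurtosis(convolve(x, pos[i]))
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Unlike the classical MED iteration, the swarm needs no gradient of the objective, which is what makes the spherical-coordinate reparametrization in the paper possible.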
Recommender engine for continuous-time quantum Monte Carlo methods
NASA Astrophysics Data System (ADS)
Huang, Li; Yang, Yi-feng; Wang, Lei
2017-03-01
Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.
Improvement of Simulation Method in Validation of Software of the Coordinate Measuring Systems
NASA Astrophysics Data System (ADS)
Nieciąg, Halina
2015-10-01
Software is used to accomplish various tasks at each stage of the functioning of modern measuring systems. Before metrological confirmation of measuring equipment, the system has to be validated. This paper discusses a method for conducting validation studies of a fragment of software used to calculate the values of measurands. Due to the number and nature of the variables affecting coordinate measurement results and the complex, multi-dimensional character of measurands, the study used the Monte Carlo method of numerical simulation. The article presents an attempt at improving the results obtained by classic Monte Carlo tools. The LHS (Latin Hypercube Sampling) algorithm was implemented as an alternative to the simple sampling scheme of the classic algorithm.
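The LHS idea, one sample in each of n equal-probability strata per dimension, can be sketched in a few lines (a generic textbook implementation, not the code used in the study):

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """LHS on the unit hypercube: each dimension gets exactly one
    sample in each of the n_samples equal-probability strata."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        # one point per stratum [i/n, (i+1)/n), then shuffle across rows
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i, p in enumerate(points):
            samples[i][d] = p
    return samples
```

Simple sampling draws each coordinate independently, so with 10 samples some of the 10 strata are typically empty; the design above guarantees full per-dimension coverage, which is what improves the Monte Carlo estimates.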
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiebe, J; Department of Physics and Astronomy, University of Calgary, Calgary, AB; Ploquin, N
2014-08-15
Monte Carlo (MC) simulation is accepted as the most accurate method to predict dose deposition when compared to other methods in radiation treatment planning. Current dose calculation algorithms used for treatment planning can become inaccurate when small radiation fields and tissue inhomogeneities are present. At our centre the Novalis Classic linear accelerator (linac) is used for Stereotactic Radiosurgery (SRS). The first MC model to date of the Novalis Classic linac was developed at our centre using the Geant4 Application for Tomographic Emission (GATE) simulation platform. GATE is relatively new, open source MC software built from CERN's Geometry and Tracking 4 (Geant4) toolkit. The linac geometry was modeled using manufacturer specifications, as well as in-house measurements of the micro MLCs. Among multiple model parameters, the initial electron beam was adjusted so that calculated depth dose curves agreed with measured values. Simulations were run on the European Grid Infrastructure through GateLab. Simulation time is approximately 8 hours on GateLab for a complete head model simulation to acquire a phase space file. Current results have a majority of points within 3% of the measured dose values for square field sizes ranging from 6×6 mm² to 98×98 mm² (the maximum field size on the Novalis Classic linac) at 100 cm SSD. The x-ray spectrum was determined from the MC data as well. The model provides an investigation into GATE's capabilities and has the potential to be used as a research tool and an independent dose calculation engine for clinical treatment plans.
Hirshberg, Barak; Sagiv, Lior; Gerber, R Benny
2017-03-14
Algorithms for quantum molecular dynamics simulations that directly use ab initio methods have many potential applications. In this article, the ab initio classical separable potentials (AICSP) method is proposed as the basis for approximate algorithms of this type. The AICSP method assumes separability of the total time-dependent wave function of the nuclei and employs mean-field potentials that govern the dynamics of each degree of freedom. In the proposed approach, the mean-field potentials are determined by classical ab initio molecular dynamics simulations. The nuclear wave function can thus be propagated in time using the effective potentials generated "on the fly". As a test of the method for realistic systems, calculations of the stationary anharmonic frequencies of hydrogen stretching modes were carried out for several polyatomic systems, including three amino acids and the guanine-cytosine pair of nucleobases. Good agreement with experiments was found. The method scales very favorably with the number of vibrational modes and should be applicable for very large molecules, e.g., peptides. The method should also be applicable for properties such as vibrational line widths and line shapes. Work in these directions is underway.
The ReaxFF reactive force-field: Development, applications, and future directions
Senftle, Thomas; Hong, Sungwook; Islam, Md Mahbubul; ...
2016-03-04
The reactive force-field (ReaxFF) interatomic potential is a powerful computational tool for exploring, developing and optimizing material properties. Methods based on the principles of quantum mechanics (QM), while offering valuable theoretical guidance at the electronic level, are often too computationally intense for simulations that consider the full dynamic evolution of a system. Alternatively, empirical interatomic potentials that are based on classical principles require significantly fewer computational resources, which enables simulations to better describe dynamic processes over longer timeframes and on larger scales. Such methods, however, typically require a predefined connectivity between atoms, precluding simulations that involve reactive events. The ReaxFF method was developed to help bridge this gap. Approaching the gap from the classical side, ReaxFF casts the empirical interatomic potential within a bond-order formalism, thus implicitly describing chemical bonding without expensive QM calculations. This article provides an overview of the development, application, and future directions of the ReaxFF method.
Compressive Spectral Method for the Simulation of the Nonlinear Gravity Waves
Bayındır, Cihan
2016-01-01
In this paper an approach for decreasing the computational effort required for spectral simulations of fully nonlinear ocean waves is introduced. The proposed approach utilizes the compressive sampling algorithm and rests on the idea of using a smaller number of spectral components than the classical spectral method. After performing the time integration with a smaller number of spectral components and applying the compressive sampling technique, it is shown that the ocean wave field can be reconstructed with significantly better efficiency than the classical spectral method. For the sparse ocean wave model in the frequency domain, fully nonlinear ocean waves with a JONSWAP spectrum are considered. By implementation of a high-order spectral method it is shown that the proposed methodology can simulate linear and fully nonlinear ocean waves with negligible loss of accuracy and with great efficiency, reducing the computation time significantly, especially for long time evolutions. PMID:26911357
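The underlying premise, that a wave field dominated by a few harmonics needs only a few spectral components, can be illustrated with a plain DFT truncation. This sketch shows only the sparsity idea; the paper's compressive-sampling reconstruction and high-order spectral time stepping are not reproduced here:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (fine for small N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part for real-valued signals."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def keep_largest(X, K):
    """Zero all but the K largest-magnitude spectral components."""
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    keep = set(order[:K])
    return [X[k] if k in keep else 0j for k in range(len(X))]

# a field with two harmonics occupies only 4 of 64 DFT bins,
# so 4 components suffice for an essentially exact reconstruction
x = [math.sin(2 * math.pi * 3 * n / 64) + 0.5 * math.cos(2 * math.pi * 7 * n / 64)
     for n in range(64)]
x_rec = idft(keep_largest(dft(x), 4))
```

Compressive sampling goes further than this truncation: it recovers the dominant components from far fewer samples than the Nyquist count, which is where the reported speedup comes from.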
NASA Astrophysics Data System (ADS)
Bednar, Earl; Drager, Steven L.
2007-04-01
Quantum information processing aims to harness the paradigm shift offered by quantum computing to solve classically hard, computationally challenging problems. Some of our computationally challenging problems of interest include: rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that quantum computers are difficult to realize due to poor scalability and the pervasive presence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
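The self-consistent exchange of interface data between separately solved regions can be illustrated with a toy 1D analogue: two overlapping subdomains of a Laplace problem, each solved with boundary values taken from the other's latest solution (an alternating Schwarz iteration). This is a deliberately simplified stand-in for the QM/EM coupling scheme, not the method itself:

```python
def solve_laplace_1d(u_left, u_right, n_pts):
    """Exact solution of u'' = 0 on a uniform grid: linear interpolation."""
    return [u_left + (u_right - u_left) * i / (n_pts - 1) for i in range(n_pts)]

def schwarz_coupled(n=21, m1=8, m2=12, iters=40):
    """Solve u'' = 0, u(0) = 0, u(1) = 1 on n nodes with two overlapping
    subdomains; each region's interface value supplies the other's
    boundary condition, iterated to self-consistency."""
    u = [0.0] * n
    u[-1] = 1.0
    for _ in range(iters):
        # region A: nodes 0..m2, right boundary taken from region B
        u[:m2 + 1] = solve_laplace_1d(u[0], u[m2], m2 + 1)
        # region B: nodes m1..n-1, left boundary taken from region A
        u[m1:] = solve_laplace_1d(u[m1], u[n - 1], n - m1)
    return u
```

In the QM/EM method the two "solvers" are of course very different (a quantum transport solver and Maxwell/drift-diffusion), but the fixed-point structure, exchanging potentials and current densities at the interface until they stop changing, is the same.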
DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K
2012-04-05
We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.
Functionality limit of classical simulated annealing
NASA Astrophysics Data System (ADS)
Hasegawa, M.
2015-09-01
By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational physics analysis to optimization algorithm research.
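For reference, the algorithm under study, classical simulated annealing on a TSP instance, reduces to the following generic loop with 2-opt moves and geometric cooling. The temperature and step parameters below are arbitrary illustrations, not the temperature-length schedule analyzed in the paper:

```python
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour over 2D city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, t0=1.0, alpha=0.995, steps=3000, seed=0):
    """Classical simulated annealing with 2-opt moves and geometric cooling."""
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    cur = tour_length(cities, tour)
    best, best_len = tour[:], cur
    T = t0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        delta = tour_length(cities, cand) - cur
        # Metropolis acceptance: always downhill, uphill with prob exp(-delta/T)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour, cur = cand, cur + delta
            if cur < best_len:
                best, best_len = tour[:], cur
        T *= alpha
    return best, best_len
```

The "temperature length" the paper discusses corresponds to how many moves are attempted before each multiplication by alpha; here it is one move per cooling step, the simplest possible choice.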
Quantum and quasi-classical collisional dynamics of O{sub 2}–Ar at high temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulusoy, Inga S.; Center for Computational and Molecular Science and Technology, School of Chemistry and Biochemistry, Georgia Institute of Technology, Atlanta, Georgia 30332-0400; Andrienko, Daniil A.
A hypersonic vehicle traveling at a high speed disrupts the distribution of internal states in the ambient flow and introduces a nonequilibrium distribution in the post-shock conditions. We investigate the vibrational relaxation in diatom-atom collisions in the range of temperatures between 1000 and 10 000 K by comparing results of extensive fully quantum-mechanical and quasi-classical simulations with available experimental data. The present paper simulates the interaction of molecular oxygen with argon as the first step in developing the aerothermodynamics models based on first principles. We devise a routine to standardize such calculations also for other scattering systems. Our results demonstrate very good agreement of vibrational relaxation time, derived from quantum-mechanical calculations with the experimental measurements conducted in shock tube facilities. At the same time, the quasi-classical simulations fail to accurately predict rates of vibrationally inelastic transitions at temperatures lower than 3000 K. This observation and the computational cost of adopted methods suggest that the next generation of high fidelity thermochemical models should be a combination of quantum and quasi-classical approaches.
Quantum and quasi-classical collisional dynamics of O2-Ar at high temperatures
NASA Astrophysics Data System (ADS)
Ulusoy, Inga S.; Andrienko, Daniil A.; Boyd, Iain D.; Hernandez, Rigoberto
2016-06-01
A hypersonic vehicle traveling at a high speed disrupts the distribution of internal states in the ambient flow and introduces a nonequilibrium distribution in the post-shock conditions. We investigate the vibrational relaxation in diatom-atom collisions in the range of temperatures between 1000 and 10 000 K by comparing results of extensive fully quantum-mechanical and quasi-classical simulations with available experimental data. The present paper simulates the interaction of molecular oxygen with argon as the first step in developing the aerothermodynamics models based on first principles. We devise a routine to standardize such calculations also for other scattering systems. Our results demonstrate very good agreement of vibrational relaxation time, derived from quantum-mechanical calculations with the experimental measurements conducted in shock tube facilities. At the same time, the quasi-classical simulations fail to accurately predict rates of vibrationally inelastic transitions at temperatures lower than 3000 K. This observation and the computational cost of adopted methods suggest that the next generation of high fidelity thermochemical models should be a combination of quantum and quasi-classical approaches.
A fictitious domain approach for the simulation of dense suspensions
NASA Astrophysics Data System (ADS)
Gallier, Stany; Lemaire, Elisabeth; Lobry, Laurent; Peters, François
2014-01-01
Low Reynolds number concentrated suspensions exhibit intricate physics that can be partly unraveled by the use of numerical simulation. To this end, a Lagrange-multiplier-free fictitious domain approach is described in this work. Unlike some methods recently proposed, the present approach is fully Eulerian and therefore does not need any transfer between the Eulerian background grid and Lagrangian nodes attached to particles. Lubrication forces between particles play an important role in suspension rheology and have been properly accounted for in the model. A robust and effective lubrication scheme is outlined, which consists in transposing the classical approach used in Stokesian Dynamics to the present direct numerical simulation. This lubrication model has also been adapted to account for solid boundaries such as walls. Contact forces between particles are modeled using a classical Discrete Element Method (DEM), a widely used method in granular matter physics. Comprehensive validations are presented on various one-, two- and three-particle configurations in a linear shear flow, as well as some O(10³) and O(10⁴) particle simulations.
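The DEM contact force referred to is, in its simplest common form, a linear spring-dashpot acting on the particle overlap. A minimal sketch with illustrative parameter values (not those calibrated in the paper):

```python
import math

def dem_normal_force(pos_i, pos_j, radius, k_n=1.0e4, damp=5.0, rel_vel_n=0.0):
    """Linear spring-dashpot normal contact force on particle i from
    particle j (equal radii). Returns the zero vector when the particles
    do not overlap; k_n and damp are illustrative, not calibrated values."""
    dx = [a - b for a, b in zip(pos_i, pos_j)]
    dist = math.sqrt(sum(c * c for c in dx))
    overlap = 2 * radius - dist
    if overlap <= 0 or dist == 0:
        return [0.0] * len(pos_i)
    normal = [c / dist for c in dx]  # unit vector from j toward i
    fmag = k_n * overlap - damp * rel_vel_n  # repulsive spring + damping
    return [fmag * c for c in normal]
```

In a suspension code this short-range repulsion acts together with the lubrication correction, which diverges as the gap closes and is computed separately from the resolved hydrodynamics.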
Toward simulating complex systems with quantum effects
NASA Astrophysics Data System (ADS)
Kenion-Hanrath, Rachel Lynn
Quantum effects like tunneling, coherence, and zero point energy often play a significant role in phenomena on the scales of atoms and molecules. However, the exact quantum treatment of a system scales exponentially with dimensionality, making it impractical for characterizing reaction rates and mechanisms in complex systems. An ongoing effort in the field of theoretical chemistry and physics is extending scalable, classical trajectory-based simulation methods capable of capturing quantum effects to describe dynamic processes in many-body systems; in the work presented here we explore two such techniques. First, we detail an explicit electron, path integral (PI)-based simulation protocol for predicting the rate of electron transfer in condensed-phase transition metal complex systems. Using a PI representation of the transferring electron and a classical representation of the transition metal complex and solvent atoms, we compute the outer sphere free energy barrier and dynamical recrossing factor of the electron transfer rate while accounting for quantum tunneling and zero point energy effects. We are able to achieve this employing only a single set of force field parameters to describe the system rather than parameterizing along the reaction coordinate. Following our success in describing a simple model system, we discuss our next steps in extending our protocol to technologically relevant materials systems. The latter half focuses on the Mixed Quantum-Classical Initial Value Representation (MQC-IVR) of real-time correlation functions, a semiclassical method which has demonstrated its ability to "tune" between quantum- and classical-limit correlation functions while maintaining dynamic consistency. Specifically, this is achieved through a parameter that determines the quantumness of individual degrees of freedom.
Here, we derive a semiclassical correction term for the MQC-IVR to systematically characterize the error introduced by different choices of simulation parameters, and demonstrate the ability of this approach to optimize MQC-IVR simulations.
Mora Osorio, Camilo Andrés; González Barrios, Andrés Fernando
2016-12-07
Calculation of the Gibbs free energy changes of biological molecules at the oil-water interface is commonly performed with Molecular Dynamics (MD) simulations, a process that can be repeated in order to find molecules of high stability in this medium. Here, an alternative method of calculation is proposed: a group contribution method (GCM) for peptides, based on MD simulations of the twenty classic amino acids, to obtain the free energy change during the insertion of any peptide chain at water-dodecane interfaces. Multiple MD simulations of the twenty classic amino acids located at the interface of rectangular simulation boxes with a dodecane-water medium were performed. A GCM to calculate the free energy of entire peptides is then proposed. The method uses the summation of the Gibbs free energy of each amino acid, adjusted as a function of its presence or absence in the chain as well as its hydrophobic characteristics. Validation of the equation was performed with twenty-one peptides, all simulated using MD in dodecane-water rectangular boxes in previous work, obtaining an average relative error of 16%.
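Once the per-residue values are in hand, the group-contribution estimate reduces to a lookup-and-sum. A minimal sketch; the table below is made up for illustration (the real values come from the authors' MD simulations, and their equation also carries hydrophobicity adjustments omitted here):

```python
# Illustrative (NOT fitted) per-residue interfacial free energies, kJ/mol
DELTA_G = {"A": -1.2, "L": -4.1, "K": 2.3, "E": 1.8, "F": -5.0}

def peptide_dg(sequence, dg_table):
    """Group contribution estimate: sum the per-residue free energies
    over the peptide chain."""
    missing = set(sequence) - set(dg_table)
    if missing:
        raise ValueError(f"no group value for residues: {sorted(missing)}")
    return sum(dg_table[aa] for aa in sequence)
```

The appeal of the approach is exactly this cheapness: screening candidate peptides becomes a table lookup instead of a fresh interface MD run per sequence.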
Direct Simulation Monte Carlo Application of the Three Dimensional Forced Harmonic Oscillator Model
2017-12-07
quasi-classical scattering theory [3,4] or trajectory [5] calculations, semiclassical, as well as close-coupled [6,7] or full [8] quantum mechanical... the quasi-classical trajectory (QCT) calculations approach for ab initio modeling of collision processes. The DMS method builds on an earlier work... [Downloaded January 30, 2018 | http://arc.aiaa.org | DOI: 10.2514/1.T5228] ...to directly use quasi-classical or quantum mechanic
Optimal and adaptive methods of processing hydroacoustic signals (review)
NASA Astrophysics Data System (ADS)
Malyshkin, G. S.; Sidel'nikov, G. B.
2014-09-01
Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background using median filtering or the method of bilateral spatial contrast.
Determination of layer ordering using sliding-window Fourier transform of x-ray reflectivity data
NASA Astrophysics Data System (ADS)
Smigiel, E.; Knoll, A.; Broll, N.; Cornet, A.
1998-01-01
X-ray reflectometry allows the determination of the thickness, density and roughness of thin layers on a substrate, from a few ångströms to a few hundred nanometres. The thicknesses are determined by trial-and-error simulation after extracting initial values of the layer thicknesses from a classical fast Fourier transform (FFT) of the reflectivity data. However, the ordering information of the layers is lost in a classical FFT, so the layer order must be known a priori. In this paper, it is shown that the order of the layers can be obtained by a sliding-window Fourier transform, the so-called Gabor representation. This joint time-frequency analysis allows the direct determination of the layer order and, therefore, the use of a more appropriate starting model for refining simulations. A simulated and a measured example demonstrate the merit of this method.
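The sliding-window idea can be illustrated with a toy computation (Python; the synthetic signal, window width, and Hann taper are illustrative assumptions, not the paper's implementation): a windowed DFT localizes the dominant oscillation frequency along the scan axis, which is exactly the information a global FFT discards and the Gabor representation retains.

```python
import cmath
import math

def window_dft_peak(signal, start, width):
    """Dominant DFT frequency bin (1 .. width//2 - 1) of a Hann-windowed segment."""
    seg = signal[start:start + width]
    w = [s * (0.5 - 0.5 * math.cos(2 * math.pi * i / (width - 1)))
         for i, s in enumerate(seg)]
    best_k, best_mag = 1, 0.0
    for k in range(1, width // 2):
        X = sum(w[n] * cmath.exp(-2j * math.pi * k * n / width)
                for n in range(width))
        if abs(X) > best_mag:
            best_k, best_mag = k, abs(X)
    return best_k

# Synthetic "reflectivity" oscillation whose frequency changes halfway along
# the axis, mimicking a change of layer contribution along the scan.
N = 256
sig = [math.sin(2 * math.pi * 8 * i / N) if i < N // 2
       else math.sin(2 * math.pi * 20 * i / N)
       for i in range(N)]

early = window_dft_peak(sig, 0, 64)    # window inside the first regime -> bin 2
late = window_dft_peak(sig, 192, 64)   # window inside the second regime -> bin 5
```

A global FFT of `sig` would show both peaks but not where along the axis each occurs; the windowed transform keeps that positional information.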
Centering Ability of ProTaper Next and WaveOne Classic in J-Shape Simulated Root Canals
Dioguardi, Mario; Cocco, Armando; Giuliani, Michele; Fabiani, Cristiano; D'Alessandro, Alfonso; Ciavarella, Domenico
2016-01-01
Introduction. The aim of this study was to evaluate and compare the shaping and centering ability of the ProTaper Next (PTN; Dentsply Maillefer, Ballaigues, Switzerland) and WaveOne Classic (Dentsply Maillefer) systems in simulated root canals. Methods. Forty J-shaped canals in resin blocks were assigned to two groups (n = 20 each). A photographic method was used to record pre- and postinstrumentation images. After superimposition, centering and shaping ability were recorded at 9 different levels from the apex using the software AutoCAD 2013 (Autodesk Inc., San Rafael, USA). Results. Shaping procedures with ProTaper Next removed less resin at each reference point level. In addition, centering ability improved with ProTaper Next at 8 of the 9 measurement points. Conclusions. Within the limitations of this study, ProTaper Next instruments removed less resin and showed better centering ability than the WaveOne Classic system. PMID:28054031
Density-functional theory simulation of large quantum dots
NASA Astrophysics Data System (ADS)
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
H. T. Schreuder; M. S. Williams
2000-01-01
In simulation sampling from forest populations, using sample sizes of 20, 40, and 60 plots, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals, both for mapped populations and for subdomains within those populations. A 68.1 ha mapped...
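The comparison described above can be sketched for the mean of a single sample (Python; the synthetic data, the sample size of 20 plots, and the hard-coded critical value t(0.975, df=19) ≈ 2.093 are illustrative assumptions, not the study's data):

```python
import math
import random
import statistics

random.seed(1)
sample = [random.gauss(100.0, 15.0) for _ in range(20)]  # 20 synthetic "plots"
mean = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))

# Classical t interval (two-sided 95%; t quantile for df = 19 hard-coded).
T_CRIT = 2.093
t_ci = (mean - T_CRIT * se, mean + T_CRIT * se)

# Percentile bootstrap interval: resample with replacement,
# take the 2.5th and 97.5th percentiles of the resampled means.
boot_means = sorted(
    statistics.mean(random.choices(sample, k=len(sample))) for _ in range(2000)
)
b_ci = (boot_means[int(0.025 * 2000)], boot_means[int(0.975 * 2000)])
```

Both intervals bracket the sample mean; where they differ in practice is coverage for skewed populations and small samples, which is what the simulation study examines.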
Joint Processing of Envelope Alignment and Phase Compensation for Isar Imaging
NASA Astrophysics Data System (ADS)
Chen, Tao; Jin, Guanghu; Dong, Zhen
2018-04-01
Range envelope alignment and phase compensation are split into two isolated steps in the classical methods of translational motion compensation in Inverse Synthetic Aperture Radar (ISAR) imaging. In the classic method of rotating-object imaging, the reference points used for envelope alignment and for Phase Difference (PD) estimation are probably not the same point, making it difficult to decouple the coupling term when correcting the Migration Through Resolution Cell (MTRC). In this paper, an improved joint-processing approach is proposed that uses a single scattering point as the common reference, based on the Prominent Point Processing (PPP) method. To this end, we first obtain an initial image using classical methods, from which a suitable scattering point is chosen. Envelope alignment and phase compensation are then performed with this scattering point as the common reference. The keystone transform can thus be applied smoothly to further improve image quality. Both simulation experiments and real data processing demonstrate the performance of the proposed method compared with the classical one.
Re'class'ification of 'quant'ified classical simulated annealing
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2009-12-01
We discuss a classical reinterpretation, based on the quantum-classical correspondence, of the quantum-mechanics-based analysis of classical Markov chains with detailed balance. The reinterpretation is then shown to reproduce a sufficient condition on the cooling schedule of classical simulated annealing, namely the well-known inverse-logarithmic scaling.
Temme, K; Osborne, T J; Vollbrecht, K G; Poulin, D; Verstraete, F
2011-03-03
The original motivation to build a quantum computer came from Feynman, who imagined a machine capable of simulating generic quantum mechanical systems--a task that is believed to be intractable for classical computers. Such a machine could have far-reaching applications in the simulation of many-body quantum physics in condensed-matter, chemical and high-energy systems. Part of Feynman's challenge was met by Lloyd, who showed how to approximately decompose the time evolution operator of interacting quantum particles into a short sequence of elementary gates, suitable for operation on a quantum computer. However, this left open the problem of how to simulate the equilibrium and static properties of quantum systems. This requires the preparation of ground and Gibbs states on a quantum computer. For classical systems, this problem is solved by the ubiquitous Metropolis algorithm, a method that has basically acquired a monopoly on the simulation of interacting particles. Here we demonstrate how to implement a quantum version of the Metropolis algorithm. This algorithm permits sampling directly from the eigenstates of the Hamiltonian, and thus evades the sign problem present in classical simulations. A small-scale implementation of this algorithm should be achievable with today's technology.
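For contrast with the quantum version, the classical Metropolis rule referenced above fits in a few lines: propose a move and accept it with probability min(1, exp(-ΔE/T)). A minimal sketch (Python; the two-level system, temperature, and step count are illustrative assumptions) checks that the sampled occupation reproduces the Boltzmann weight:

```python
import math
import random

random.seed(0)
T = 1.0                  # temperature (units with k_B = 1)
E = (0.0, 1.0)           # energies of a two-level system
state, hits = 0, 0
n_steps = 100_000

for _ in range(n_steps):
    proposal = 1 - state                       # flip to the other level
    dE = E[proposal] - E[state]
    if dE <= 0 or random.random() < math.exp(-dE / T):
        state = proposal                       # Metropolis acceptance rule
    hits += state                              # time spent in the excited level

p_excited = hits / n_steps
p_exact = math.exp(-1.0 / T) / (1.0 + math.exp(-1.0 / T))  # Boltzmann weight
```

The quantum algorithm in the abstract plays the analogous role for eigenstates of a Hamiltonian, where no classical chain can sample without encountering the sign problem.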
Computer simulation of liquid metals
NASA Astrophysics Data System (ADS)
Belashchenko, D. K.
2013-12-01
Methods for, and results of, the computer simulation of liquid metals are reviewed. Two basic methods are considered: classical molecular dynamics with known interparticle potentials, and the ab initio method. Most attention is given to simulation results obtained using the embedded atom model (EAM). The thermodynamic, structural, and diffusion properties of liquid metal models under normal and extreme (shock) pressure conditions are considered. Simulation results for liquid metals of Groups I-IV, a number of transition metals, and some binary systems (Fe-C, Fe-S) are examined. The possibility of accounting in simulations for the thermal contribution of delocalized electrons to the energy and pressure is considered. Solidification features of supercooled metals are also discussed.
Computer Simulation of Classic Studies in Psychology.
ERIC Educational Resources Information Center
Bradley, Drake R.
This paper describes DATASIM, a comprehensive software package which generates simulated data for actual or hypothetical research designs. DATASIM is primarily intended for use in statistics and research methods courses, where it is used to generate "individualized" datasets for students to analyze, and later to correct their answers.…
Report on the Implementation of Homogeneous Nucleation Scheme in MARMOT-based Phase Field Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Hu, Shenyang Y.; Sun, Xin
2013-09-30
In this report, we summarize our effort in developing mesoscale phase field models for predicting precipitation kinetics in alloys during thermal aging and/or under irradiation in nuclear reactors. The first part focuses on developing a method to predict the thermodynamic properties of critical nuclei, such as their sizes, concentration profiles, and nucleation barriers. These properties are crucial for quantitative simulation of precipitate evolution kinetics with phase field models. The Fe-Cr alloy was chosen as a model alloy because valid thermodynamic and kinetic data are available for it and because it is an important structural material in nuclear reactors. A constrained shrinking dimer dynamics (CSDD) method was developed to search for the minimum energy path during nucleation. With this method we are able to predict the concentration profiles of the critical nuclei of Cr-rich precipitates and the nucleation energy barriers. Simulations showed that the Cr concentration distribution in the critical nucleus depends strongly on the overall Cr concentration as well as on temperature. The Cr concentration inside the critical nucleus is much smaller than the equilibrium concentration given by the equilibrium phase diagram, which implies that a non-classical nucleation theory should be used to treat the nucleation of Cr precipitates in Fe-Cr alloys. The growth kinetics of both classical and non-classical nuclei was investigated by the phase field approach. Several interesting phenomena were observed in the simulations: 1) a critical classical nucleus first shrinks toward its non-classical counterpart and then grows; 2) a non-classical nucleus grows much more slowly at its early growth stage than diffusion-controlled growth kinetics would predict; and 3) a critical classical nucleus grows faster at the early growth stage than the non-classical nucleus.
All of these results demonstrate that introducing the correct critical nuclei into phase field modeling is essential for correctly capturing precipitation kinetics. In most alloys the matrix and precipitate phases differ in concentration as well as in crystal structure. For example, Cu precipitates in Fe-Cu alloys have the fcc crystal structure, while the matrix Fe-Cu solid solution is bcc at low temperature. The WBM and KimS models, in which both concentrations and order parameters describe the microstructure, are commonly used to model precipitation in such alloys. These models had not yet been implemented in Marmot. In the second part of this report, we focus on implementing the WBM and KimS models in Marmot, taking Fe-Cu alloys, which are important structural materials in nuclear reactors, as the model system.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios with the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 GB of floating point data. Our method is most efficient on large simulation meshes and is much better suited to compressing large scale simulation results than the regular algorithms.
Hybrid molecular dynamics simulation for plasma induced damage analysis
NASA Astrophysics Data System (ADS)
Matsukuma, Masaaki
2016-09-01
In order to enable further device size reduction (Moore's law) and improved power performance, the semiconductor industry is introducing new materials and device structures into the semiconductor fabrication process. Materials now include III-V compounds, germanium, cobalt, ruthenium, hafnium, and others. The device structure in both memory and logic has been evolving from planar to three dimensional (3D). One such device is the FinFET, in which the transistor channel is a vertical fin made of silicon, silicon-germanium, or germanium. These changes have brought renewed interest in the structural damage caused by energetic ion bombardment of the fin sidewalls, which are exposed to the ion flux from the plasma during the fin strip-off step. Better control of the physical damage to 3D devices requires a better understanding of damage formation mechanisms in such new materials and structures. In this study, damage formation by ion bombardment has been simulated for Si and Ge substrates using Quantum Mechanics/Molecular Mechanics (QM/MM) hybrid simulations and compared to results from classical molecular dynamics (MD) simulations. In our QM/MM simulations, the highly reactive region in which the structural damage is created is treated with the Density Functional based Tight Binding (DFTB) method, and the region remote from it is treated with classical MD using the Stillinger-Weber and Moliere potentials. The learn-on-the-fly method is also used to reduce the computational load; hence our QM/MM simulation is much faster than full QC-MD simulations and than the original QM/MM approach. The amorphous-layer profiles simulated with QM/MM differ distinctly in thickness between the silicon and germanium substrates, with the damage profile in germanium characterized by a deeper tail than in silicon.
These traits are also observed in mass-selected ion beam experiments. This dependence of the damage profile on ion species and substrate cannot be reproduced with classical MD simulations. While the Moliere potential is convenient for describing the interactions between halogens and other atoms, more accurate interatomic modeling, such as the DFTB method, which takes molecular orbitals into account, should be used to make the simulations more realistic. Based on the simulation results, the damage formation scenario will be discussed.
A new time domain random walk method for solute transport in 1-D heterogeneous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banton, O.; Delay, F.; Porel, G.
A new method to simulate solute transport in 1-D heterogeneous media is presented. This time domain random walk (TDRW) method, similar in concept to the classical random walk method, calculates the arrival time of a particle cloud at a given location, directly providing the solute breakthrough curve. The main advantage of the method is that it avoids the restrictions on space increments and time steps that apply to finite difference and random walk methods. In a homogeneous zone, the breakthrough curve (BTC) can be calculated directly at a given distance using a few hundred particles, or directly at the boundary of the zone. Comparisons with analytical solutions and with the classical random walk method show the reliability of the method. The velocity and dispersivity calculated from the simulated results agree within two percent with the values used as model input. For contrasted heterogeneous media, the random walk can generate high numerical dispersion, while the time domain approach does not.
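The time-domain idea, drawing an arrival time for each particle instead of stepping it through space, can be sketched as follows (Python; the Gaussian arrival-time model with mean L/v and spread sqrt(2αL)/v, the parameter values, and the particle count are illustrative assumptions rather than the authors' exact formulation):

```python
import bisect
import math
import random
import statistics

random.seed(0)
L, v, alpha = 100.0, 1.0, 0.5      # travel distance, velocity, dispersivity
n_particles = 5000

mu = L / v                                 # mean advective arrival time
sigma = math.sqrt(2.0 * alpha * L) / v     # dispersive spread of arrival times

# One draw per particle: the whole trajectory collapses into one arrival time.
arrivals = sorted(random.gauss(mu, sigma) for _ in range(n_particles))

def btc(t):
    """Breakthrough curve: fraction of the particle cloud arrived by time t."""
    return bisect.bisect_right(arrivals, t) / n_particles

v_est = L / statistics.mean(arrivals)      # recover velocity from mean arrival
```

Because each particle contributes a single arrival-time draw rather than many space-time steps, the step-size restrictions of grid-based and classical random walk schemes simply do not arise, which is the advantage the abstract emphasizes.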
NASA Astrophysics Data System (ADS)
Jang, T. S.
2018-03-01
A dispersion-relation preserving (DRP) method, a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating nonlinear dispersive wave systems because it preserves the dispersion relation; however, it has flaws, such as a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these, a new DRP method with improved convergence is proposed in this paper. The improved method is proved to be convergent and dispersion-relation preserving for small waves, and the unique existence of its solutions is also proved. In numerical experiments, the method captures nonlinear wave phenomena such as moving solitary waves and their binary collisions at different wave amplitudes, and it exhibits a much wider ROC than the previous method of Jang (2017). Moreover, it can simulate high (large-amplitude) nonlinear dispersive waves: it is demonstrated on a large-amplitude solitary wave and on the collision of two large-amplitude solitary waves, both of which the previous method failed to simulate. These results represent a substantial practical improvement over Jang (2017).
Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations
NASA Astrophysics Data System (ADS)
Mitry, Mina
Often, computationally expensive engineering simulations can impede the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, based on combinations of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as to a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
NASA Astrophysics Data System (ADS)
García-Vela, A.
2000-05-01
A definition of a quantum-type phase-space distribution is proposed in order to represent the initial state of the system in a classical dynamics simulation. The central idea is to define an initial quantum phase-space state of the system as the direct product of the coordinate and momentum representations of the quantum initial state. The phase-space distribution is then obtained as the square modulus of this phase-space state. The resulting phase-space distribution closely resembles the quantum nature of the system's initial state. The initial conditions are sampled from the distribution using a grid technique in phase space. With this type of sampling, the distribution of initial conditions more faithfully reproduces the shape of the original phase-space distribution. The method is applied to generate initial conditions describing the three-dimensional state of the Ar-HCl cluster prepared by ultraviolet excitation. The photodissociation dynamics is simulated by classical trajectories, and the results are compared with those of a wave packet calculation. The classical and quantum descriptions are found to be in good agreement for those dynamical events less subject to quantum effects. The classical result fails to reproduce the quantum mechanical one for the more strongly quantum features of the dynamics. The properties and applicability of the proposed phase-space distribution and sampling technique are discussed.
Simulation of vibrational dephasing of I(2) in solid Kr using the semiclassical Liouville method.
Riga, Jeanne M; Fredj, Erick; Martens, Craig C
2006-02-14
In this paper, we present simulations of the decay of quantum coherence between vibrational states of I(2) in its ground (X) electronic state embedded in a cryogenic Kr matrix. We employ a numerical method based on the semiclassical limit of the quantum Liouville equation, which allows the simulation of the evolution and decay of quantum vibrational coherence using classical trajectories and ensemble averaging. The vibrational level-dependent interaction of the I(2)(X) oscillator with the rare-gas environment is modeled using a recently developed method for constructing state-dependent many-body potentials for quantum vibrations in a many-body classical environment [J. M. Riga, E. Fredj, and C. C. Martens, J. Chem. Phys. 122, 174107 (2005)]. The vibrational dephasing rates γ(0n) for coherences prepared between the ground vibrational state |0⟩ and excited vibrational states |n⟩ are calculated as a function of n and lattice temperature T. Excellent agreement with recent experiments performed by Karavitis et al. [Phys. Chem. Chem. Phys. 7, 791 (2005)] is obtained.
Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.
Gao, J
2016-01-01
Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method with bisection sampling is summarized, which provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (the maximum closure problem) is performed on block estimates, without directly considering block grade uncertainty. We propose an alternative approach of stochastic optimization, in which the optimal pit is computed on block expected profits, rather than expected grades, obtained from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach, and a simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of the stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs; its gains over the simulated pit approach increase with both treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical or simulated pit approaches for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu
2008-01-01
A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Jincheng; Rimsza, Jessica
Computational simulations at the atomistic level play an increasingly important role in understanding the structures, behaviors, and structure-property relationships of glasses and amorphous materials. In this paper, we review atomistic simulation methods ranging from first principles calculations and ab initio molecular dynamics (AIMD) to classical molecular dynamics (MD) and meso-scale kinetic Monte Carlo (KMC) simulations, and their applications to glass-water interactions and glass dissolution. In particular, the use of these methods in understanding the reaction mechanisms of water with oxide glasses, water-glass interfaces, hydrated porous silica gel formation, the structure and properties of multicomponent glasses, and microstructure evolution is reviewed. The advantages and disadvantages of these methods are discussed, and the current challenges and future directions of atomistic simulations of glass dissolution are presented.
NASA Astrophysics Data System (ADS)
Xing, Guan; Wu, Guo-Zhen
2001-02-01
A classical coset Hamiltonian is introduced for the system of one electron in multiple sites. With this Hamiltonian, the dynamical behaviour of the electronic motion can be readily simulated. The simulation reproduces the retardation of electron density decay in a lattice with randomly distributed site energies, an analogy with Anderson localization. The algorithm is also applied to reproduce the Hammett equation, which relates the reaction rate to the properties of the substituents in organic chemical reactions. The advantages and shortcomings of this algorithm, as contrasted with traditional quantum methods such as molecular orbital theory, are also discussed.
Exact and approximate stochastic simulation of intracellular calcium dynamics.
Wieder, Nicolas; Fink, Rainer H A; Wegner, Frederic von
2011-01-01
In simulations of chemical systems, the main task is to find an exact or approximate solution of the chemical master equation (CME) that satisfies given constraints on computation time and accuracy. While Brownian motion simulations of single molecules are often too time-consuming to represent the mesoscopic level, the classical Gillespie algorithm is a stochastically exact algorithm that gives satisfactory results in representing calcium microdomains. Gillespie's algorithm can be approximated via the tau-leap method and the chemical Langevin equation (CLE); both lead to a substantial acceleration in computation time at a relatively small cost in accuracy. Eliminating the noise terms leads to the classical, deterministic reaction rate equations (RRE). For complex multiscale systems, hybrid simulations are increasingly proposed to combine the advantages of stochastic and deterministic algorithms. Striated muscle cells (e.g., cardiac and skeletal muscle cells) are often used as exemplary cell types in this context: their properties are well described, and they express many common calcium-dependent signaling pathways. The purpose of the present paper is to provide an overview of the aforementioned simulation approaches and their mutual relationships across the spectrum from stochastic to deterministic algorithms.
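As a minimal illustration of the stochastically exact end of that spectrum, Gillespie's direct method draws an exponential waiting time from the total propensity and fires one reaction at a time. The sketch below (Python; the single first-order decay reaction, rate constant, and run count are illustrative assumptions) also checks the stochastic mean against the deterministic RRE limit:

```python
import math
import random

random.seed(0)

def gillespie_decay(n0, c, t_end):
    """Exact SSA for a single first-order decay A -> 0 with rate constant c."""
    n, t = n0, 0.0
    while n > 0:
        a = c * n                      # total propensity
        t += random.expovariate(a)     # exponential waiting time to next event
        if t > t_end:
            break
        n -= 1                         # fire the reaction
    return n

n0, c, t_end, runs = 50, 1.0, 1.0, 2000
mean_n = sum(gillespie_decay(n0, c, t_end) for _ in range(runs)) / runs
det_n = n0 * math.exp(-c * t_end)      # deterministic RRE prediction
```

The tau-leap approximation would replace the per-event loop with Poisson-distributed event counts over fixed intervals, and dropping the noise entirely recovers the deterministic `det_n` trajectory.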
Zonal wavefront reconstruction in quadrilateral geometry for phase measuring deflectometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lei; Xue, Junpeng; Gao, Bo
2017-06-14
Zonal reconstruction methods are widely applied in slope-based metrology because of their good capability of reconstructing local details of a surface profile. It has been noted in the literature that large reconstruction errors occur when zonal reconstruction methods designed for rectangular geometry are used to process slopes in a quadrilateral geometry, the more general geometry encountered in phase measuring deflectometry. In this paper, we present a new approach to zonal methods for quadrilateral geometry. Instead of employing intermediate slopes to set up height-slope equations, we use the height increment as a more general connector to establish the height-slope relations for least-squares regression. The classical zonal methods and interpolation-assisted zonal methods are compared with our proposal. Results of both simulation and experiment demonstrate the effectiveness of the proposed idea, and the modifications required to the classical zonal methods are addressed. The new methods preserve many good aspects of the classical ones, such as the ability to handle a large, incomplete slope dataset in an arbitrary aperture and a computational complexity comparable with the classical zonal method, while achieving much higher accuracy when integrating slopes in quadrilateral geometry.
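In one dimension, the height-increment idea reduces to writing h[i+1] - h[i] as the integral of the measured slope across one cell. The sketch below (Python; the 1-D geometry, trapezoidal increment rule, and quadratic test surface are illustrative assumptions, far simpler than the paper's quadrilateral least-squares formulation) recovers a profile from its sampled slopes:

```python
# Heights from slopes via trapezoidal height increments:
#   h[i+1] - h[i] ~= (s[i] + s[i+1]) / 2 * dx
dx = 0.01
xs = [i * dx for i in range(101)]
slopes = [2.0 * x for x in xs]     # "measured" slope of the test surface h(x) = x^2

h = [0.0]                          # anchor the reconstruction at h(0) = 0
for i in range(len(xs) - 1):
    h.append(h[-1] + 0.5 * (slopes[i] + slopes[i + 1]) * dx)

# Reconstruction error against the known surface.
err = max(abs(h[i] - xs[i] ** 2) for i in range(len(xs)))
```

In two dimensions with a quadrilateral sampling geometry, each such increment becomes one row of an overdetermined linear system, and the heights are obtained by least squares rather than by a single cumulative sum.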
Raskin, Cody; Owen, J. Michael
2016-10-24
Here, we discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Engel, Hamutal; Doron, Dvir; Kohen, Amnon; Major, Dan Thomas
2012-04-10
The inclusion of nuclear quantum effects such as zero-point energy and tunneling is of great importance in studying condensed phase chemical reactions involving the transfer of protons, hydrogen atoms, and hydride ions. In the current work, we derive an efficient quantum simulation approach for the computation of the momentum distribution in condensed phase chemical reactions. The method is based on a quantum-classical approach wherein quantum and classical simulations are performed separately. The classical simulations use standard sampling techniques, whereas the quantum simulations employ an open polymer chain path integral formulation which is computed using an efficient Monte Carlo staging algorithm. The approach is validated by applying it to a one-dimensional harmonic oscillator and symmetric double-well potential. Subsequently, the method is applied to the dihydrofolate reductase (DHFR) catalyzed reduction of 7,8-dihydrofolate by nicotinamide adenine dinucleotide phosphate hydride (NADPH) to yield S-5,6,7,8-tetrahydrofolate and NADP(+). The key chemical step in the catalytic cycle of DHFR involves a stereospecific hydride transfer. In order to estimate the amount of quantum delocalization, we compute the position and momentum distributions for the transferring hydride ion in the reactant state (RS) and transition state (TS) using a recently developed hybrid semiempirical quantum mechanics-molecular mechanics potential energy surface. Additionally, we examine the effect of compression of the donor-acceptor distance (DAD) in the TS on the momentum distribution. The present results suggest differential quantum delocalization in the RS and TS, as well as reduced tunneling upon DAD compression.
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space charge force still dominates the computation time of beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy, providing a fast routine for high-performance calculation of the space charge effect in accelerators.
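The explicitly zero-padded convolution that the third optimization replaces can be sketched with the doubled-grid (Hockney) trick; the softened point-charge kernel below is an illustrative stand-in for the paper's integrated Green's function.

```python
import numpy as np

def build_doubled_kernel(g, nx, ny):
    """Sample a symmetric Green's function g(|di|, |dj|) on a doubled
    grid in wrap-around ordering, so that a cyclic convolution on the
    doubled grid reproduces the free-space (linear) convolution."""
    di = np.minimum(np.arange(2 * nx), 2 * nx - np.arange(2 * nx))
    dj = np.minimum(np.arange(2 * ny), 2 * ny - np.arange(2 * ny))
    return g(di[:, None], dj[None, :])

def fft_convolve(rho, G):
    """Zero-padded convolution of the charge density rho with the
    doubled-grid kernel G via FFT: O(N log N) instead of O(N^2)."""
    nx, ny = rho.shape
    rho_pad = np.zeros((2 * nx, 2 * ny))
    rho_pad[:nx, :ny] = rho
    phi = np.fft.ifft2(np.fft.fft2(rho_pad) * np.fft.fft2(G)).real
    return phi[:nx, :ny]

# Illustrative softened point-charge kernel (assumption, not the
# integrated Green's function of the paper)
g = lambda di, dj: 1.0 / np.sqrt(1.0 + di**2 + dj**2)
rng = np.random.default_rng(0)
rho = rng.random((6, 5))
phi = fft_convolve(rho, build_doubled_kernel(g, 6, 5))
```

The paper's novel fast convolution routine reduces the cost of exactly this doubled-grid step.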
High temperature phonon dispersion in graphene using classical molecular dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anees, P., E-mail: anees@igcar.gov.in; Panigrahi, B. K.; Valsakumar, M. C., E-mail: anees@igcar.gov.in
2014-04-24
Phonon dispersion and phonon density of states of graphene are calculated using classical molecular dynamics simulations. In this method, the dynamical matrix is constructed based on linear response theory by computing the displacements of atoms during the simulations. The computed phonon dispersions show excellent agreement with experiments. Simulations are performed in both NVT and NPT ensembles at 300 K, and the LO/TO modes are found to harden at the Γ point. The NPT ensemble simulations capture the anharmonicity of the crystal accurately, and the hardening of the LO/TO modes is more pronounced. We also find that at 300 K the C-C bond length falls below its equilibrium value and the ZA bending mode frequency becomes imaginary close to Γ along the K-Γ direction, which indicates an instability of flat 2D graphene sheets.
NASA Astrophysics Data System (ADS)
Elliott, Thomas J.; Gu, Mile
2018-03-01
Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine.
2016-01-01
The nucleation of crystals in liquids is one of nature’s most ubiquitous phenomena, playing an important role in areas such as climate change and the production of drugs. As the early stages of nucleation involve exceedingly small time and length scales, atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization. In this review, we take stock of the numerous molecular dynamics simulations that, in the past few decades, have unraveled crucial aspects of crystal nucleation in liquids. We put into context the theoretical framework of classical nucleation theory and the state-of-the-art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions. We shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios, ranging from colloidal particles to natural gas hydrates, and that, as a result, the general applicability of classical nucleation theory has been repeatedly called into question. We have attempted to identify the most pressing open questions in the field. We believe that, by improving (i) existing interatomic potentials and (ii) currently available enhanced sampling methods, the community can move toward accurate investigations of realistic systems of practical interest, thus bringing simulations a step closer to experiments. PMID:27228560
Sosso, Gabriele C; Chen, Ji; Cox, Stephen J; Fitzner, Martin; Pedevilla, Philipp; Zen, Andrea; Michaelides, Angelos
2016-06-22
The nucleation of crystals in liquids is one of nature's most ubiquitous phenomena, playing an important role in areas such as climate change and the production of drugs. As the early stages of nucleation involve exceedingly small time and length scales, atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization. In this review, we take stock of the numerous molecular dynamics simulations that, in the past few decades, have unraveled crucial aspects of crystal nucleation in liquids. We put into context the theoretical framework of classical nucleation theory and the state-of-the-art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions. We shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios, ranging from colloidal particles to natural gas hydrates, and that, as a result, the general applicability of classical nucleation theory has been repeatedly called into question. We have attempted to identify the most pressing open questions in the field. We believe that, by improving (i) existing interatomic potentials and (ii) currently available enhanced sampling methods, the community can move toward accurate investigations of realistic systems of practical interest, thus bringing simulations a step closer to experiments.
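For reference, the classical nucleation theory whose general applicability these simulations probe rests on three textbook relations (standard forms, not taken from the review): the free-energy cost of a spherical nucleus of radius r, the critical radius, and the nucleation rate,

```latex
\Delta G(r) = 4\pi r^{2}\gamma \;-\; \frac{4}{3}\pi r^{3}\,\rho_s\,|\Delta\mu|,
\qquad
r^{*} = \frac{2\gamma}{\rho_s\,|\Delta\mu|},
\qquad
J = J_{0}\,\exp\!\left(-\frac{\Delta G^{*}}{k_{B}T}\right)
\;\text{with}\;
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\rho_s^{2}\,|\Delta\mu|^{2}},
```

where γ is the interfacial free energy, ρ_s the number density of the nucleating phase, and Δμ the chemical-potential difference driving the transition. Much of the disagreement reviewed above concerns whether γ and the nucleus geometry assumed here survive at the molecular scale.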
NASA Astrophysics Data System (ADS)
Baroni, Stefano
Modern simulation methods based on electronic-structure theory have long been deemed unfit to compute heat transport coefficients within the Green-Kubo formalism. This is so because the quantum-mechanical energy density from which the heat flux is derived is inherently ill defined, thus allegedly hampering the use of the Green-Kubo formula. While this objection would actually apply to classical systems as well, I will demonstrate that the thermal conductivity is indeed independent of the specific microscopic expression for the energy density and current from which it is derived. This fact results from a kind of gauge invariance stemming from energy conservation and extensivity, which I will illustrate numerically for a classical Lennard-Jones fluid. I will then introduce an expression for the adiabatic energy flux, derived within density-functional theory, that allows simulating atomic heat transport using equilibrium ab initio molecular dynamics. The resulting methodology is demonstrated by comparing results from ab initio and classical molecular-dynamics simulations of a model liquid-argon system, for which accurate inter-atomic potentials are derived by the force-matching method, and applied to compute the thermal conductivity of heavy water at ambient conditions. Finally, the problem of evaluating transport coefficients, along with their accuracy, from relatively short trajectories is addressed and discussed with a few representative examples. Partially funded by the European Union through the MaX Centre of Excellence (Grant No. 676598).
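The Green-Kubo formula at issue expresses the thermal conductivity as the time integral of the equilibrium heat-flux autocorrelation (standard form, with j the macroscopic heat-flux density):

```latex
\kappa = \frac{V}{3\,k_{B}T^{2}} \int_{0}^{\infty}
\langle \mathbf{j}(t)\cdot\mathbf{j}(0)\rangle \, dt .
```

The gauge-invariance argument is, roughly, that redefining the microscopic energy density changes j only by terms whose time integral stays bounded, so the integral above, and hence κ, is unchanged.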
Time Hierarchies and Model Reduction in Canonical Non-linear Models
Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto
2016-01-01
The time-scale hierarchies of a very general class of models in differential equations are analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism, and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then through a large synthetic network, and finally with numerical simulations of three classical benchmark models of real biological systems. PMID:27708665
Isogeometric analysis and harmonic stator-rotor coupling for simulating electric machines
NASA Astrophysics Data System (ADS)
Bontinck, Zeger; Corno, Jacopo; Schöps, Sebastian; De Gersem, Herbert
2018-06-01
This work proposes Isogeometric Analysis as an alternative to classical finite elements for simulating electric machines. Through the spline-based Isogeometric discretization it is possible to parametrize the circular arcs exactly, thereby avoiding any geometrical error in the representation of the air gap where a high accuracy is mandatory. To increase the generality of the method, and to allow rotation, the rotor and the stator computational domains are constructed independently as multipatch entities. The two subdomains are then coupled using harmonic basis functions at the interface which gives rise to a saddle-point problem. The properties of Isogeometric Analysis combined with harmonic stator-rotor coupling are presented. The results and performance of the new approach are compared to the ones for a classical finite element method using a permanent magnet synchronous machine as an example.
Time Domain Stability Margin Assessment Method
NASA Technical Reports Server (NTRS)
Clements, Keith
2017-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
Real-time dynamics of matrix quantum mechanics beyond the classical approximation
NASA Astrophysics Data System (ADS)
Buividovich, Pavel; Hanada, Masanori; Schäfer, Andreas
2018-03-01
We describe a numerical method that allows one to go beyond the classical approximation for the real-time dynamics of many-body systems by approximating the many-body Wigner function by the most general Gaussian function with time-dependent mean and dispersion. On the simple example of a classically chaotic system with two degrees of freedom, we demonstrate that this Gaussian state approximation is accurate for significantly smaller field strengths and longer times than the classical one. Applying this approximation to matrix quantum mechanics, we demonstrate that the quantum Lyapunov exponents are in general smaller than their classical counterparts, and even seem to vanish below some temperature. This behavior resembles the finite-temperature phase transition that was found for this system in Monte Carlo simulations, and ensures that the system does not violate the Maldacena-Shenker-Stanford bound λL < 2πT, which inevitably happens for classical dynamics at sufficiently small temperatures.
Adaptive kernel function using line transect sampling
NASA Astrophysics Data System (ADS)
Albadareen, Baker; Ismail, Noriszura
2018-04-01
The estimation of f(0) is crucial in the line transect method, which is used for estimating population abundance in wildlife surveys. The classical kernel estimator of f(0) has a high negative bias. Our study proposes an adaptation of the kernel function that is shown to be more efficient than the usual kernel estimator. A simulation study is conducted to compare the performance of the proposed estimators with the classical kernel estimators.
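The classical estimator under study can be sketched as a reflected kernel density estimate evaluated at the transect line; the Gaussian kernel, Silverman rule-of-thumb bandwidth, and half-normal test case are our illustrative assumptions, not the paper's adapted kernel.

```python
import numpy as np

def f0_kernel(distances, h=None):
    """Classical kernel estimator of f(0) from perpendicular detection
    distances in line transect sampling, reflecting the data about the
    transect line at x = 0. Gaussian kernel with Silverman's
    rule-of-thumb bandwidth by default (an assumption; the paper
    proposes adapted kernels precisely because this classical
    estimator is biased at the boundary)."""
    x = np.asarray(distances, dtype=float)
    n = x.size
    if h is None:
        h = 1.06 * x.std() * n ** (-0.2)  # Silverman's rule of thumb
    k = np.exp(-0.5 * (x / h) ** 2) / np.sqrt(2.0 * np.pi)
    return 2.0 * k.sum() / (n * h)        # factor 2: reflection at x = 0

# Half-normal detection function: true f(0) = sqrt(2/pi) ~ 0.798
rng = np.random.default_rng(0)
d = np.abs(rng.normal(0.0, 1.0, 5000))
est = f0_kernel(d)
```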
Convergence acceleration of molecular dynamics methods for shocked materials using velocity scaling
NASA Astrophysics Data System (ADS)
Taylor, DeCarlos E.
2017-03-01
In this work, a convergence acceleration method applicable to extended system molecular dynamics techniques for shock simulations of materials is presented. The method uses velocity scaling to reduce the instantaneous value of the Rankine-Hugoniot conservation of energy constraint used in extended system molecular dynamics methods, more rapidly driving the system towards a converged Hugoniot state. When used in conjunction with the constant stress Hugoniostat method, the velocity-scaled trajectories show faster convergence to the final Hugoniot state, with little difference observed in the converged Hugoniot energy, pressure, volume, and temperature. A derivation of the scale factor is presented, and the performance of the technique is demonstrated using the boron carbide armour ceramic as a test material. It is shown that simulations of boron carbide Hugoniot states, from 5 to 20 GPa, using both a classical Tersoff potential and an ab initio density functional, converge more rapidly when the velocity scaling algorithm is applied. The accelerated convergence afforded by the current algorithm enables more rapid determination of Hugoniot states, reducing the computational demand of such studies when using expensive ab initio or classical potentials.
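The constraint being driven to zero is the Rankine-Hugoniot energy condition; since a uniform velocity rescaling changes only the kinetic energy K, a scale factor of the following generic form can absorb the instantaneous residual (a sketch of the idea, not necessarily the paper's derived factor):

```latex
\Delta E_{\mathrm{HG}} = E - E_{0} - \tfrac{1}{2}\,(P + P_{0})\,(V_{0} - V),
\qquad
\mathbf{v}_i \to \lambda\,\mathbf{v}_i
\quad\text{with}\quad
\lambda = \sqrt{1 - \frac{\Delta E_{\mathrm{HG}}}{K}},
```

so that after rescaling the kinetic energy changes by exactly −ΔE_HG, removing the instantaneous violation of the Hugoniot energy condition.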
Atomistic Computer Simulations of Water Interactions and Dissolution of Inorganic Glasses
Du, Jincheng; Rimsza, Jessica
2017-09-01
Computational simulations at the atomistic level play an increasingly important role in understanding the structures, behaviors, and structure-property relationships of glass and amorphous materials. In this paper, we review atomistic simulation methods ranging from first-principles calculations and ab initio molecular dynamics (AIMD) to classical molecular dynamics (MD) and meso-scale kinetic Monte Carlo (KMC) simulations, and their applications to glass-water interactions and glass dissolution. In particular, the use of these simulation methods in understanding the reaction mechanisms of water with oxide glasses, water-glass interfaces, hydrated porous silica gel formation, the structure and properties of multicomponent glasses, and microstructure evolution is reviewed. The advantages and disadvantages of these methods are discussed, and the current challenges and future directions of atomistic simulations in glass dissolution are presented.
Modeling and simulating industrial land-use evolution in Shanghai, China
NASA Astrophysics Data System (ADS)
Qiu, Rongxu; Xu, Wei; Zhang, John; Staenz, Karl
2018-01-01
This study proposes a cellular automata-based Industrial and Residential Land Use Competition Model to simulate the dynamic spatial transformation of industrial land use in Shanghai, China. In the proposed model, land development activities in a city are delineated as competitions among different land-use types. The Hedonic Land Pricing Model is adopted to implement the competition framework. To improve simulation results, the Land Price Agglomeration Model was devised to simulate and adjust classic land price theory. A new evolutionary algorithm-based parameter estimation method was devised in place of traditional methods. Simulation results show that the proposed model closely reproduces actual land transformation patterns and that it can simulate not only land development but also redevelopment processes in metropolitan areas.
Monteiro, C A
1991-01-01
Two methods for estimating the prevalence of growth retardation in a population are evaluated: the classical method, which is based on the proportion of children whose height is more than 2 standard deviations below the expected mean of a reference population; and a new method recently proposed by Mora, which is based on the whole height distribution of observed and reference populations. Application of the classical method to several simulated populations leads to the conclusion that in most situations in developing countries the prevalence of growth retardation is grossly underestimated, and reflects only the presence of severe growth deficits. A second constraint with this method is a marked reduction of the relative differentials between more and less exposed strata. Application of Mora's method to the same simulated populations reduced but did not eliminate these constraints. A novel method for estimating the prevalence of growth retardation, which is based also on the whole height distribution of observed and reference populations, is also described and evaluated. This method produces better estimates of the true prevalence of growth retardation with no reduction in relative differentials.
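The classical cutoff method, and the underestimation the paper describes, can be illustrated with a simulated population shifted one reference SD downward (the numbers are our own illustrative choices, not the paper's):

```python
import numpy as np

# A population whose height-for-age distribution is shifted down by
# 1 SD relative to the reference. Under this shift model essentially
# every child carries some growth deficit, yet the classical "< -2 SD"
# cutoff flags only ~16%.
rng = np.random.default_rng(42)
z = rng.normal(-1.0, 1.0, 100_000)     # height-for-age z-scores

classical = np.mean(z < -2.0)          # classical prevalence estimate (~0.16)
below_median = np.mean(z < 0.0)        # share below the reference median (~0.84)
excess = classical - 0.0228            # prevalence in excess of the ~2.3%
                                       # expected in the reference itself
```

Whole-distribution methods, such as Mora's and the one proposed here, use the gap between the observed and reference distributions rather than the tail beyond a fixed cutoff.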
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥ 200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
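The attenuation that regression calibration corrects can be sketched for a single covariate with classical ME; here the reliability ratio is computed from the true covariate purely for illustration, whereas in the paper's setting it is estimated from replicates or an internal sub-study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
x = rng.normal(0.0, 1.0, n)            # true covariate (unobserved in practice)
w = x + rng.normal(0.0, 1.0, n)        # observed with classical ME
y = 2.0 * x + rng.normal(0.0, 0.5, n)  # outcome, true slope = 2

b_naive = np.polyfit(w, y, 1)[0]       # attenuated toward 2 * lambda = 1
lam = x.var() / w.var()                # reliability ratio (~0.5 here); a real
                                       # analysis estimates it from a sub-study
x_rc = w.mean() + lam * (w - w.mean()) # calibrated covariate E[X | W]
b_rc = np.polyfit(x_rc, y, 1)[0]       # approximately recovers the slope 2
```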
Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics.
Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso
2016-10-17
Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law.
Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics
Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso
2016-01-01
Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law. PMID:27748418
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutjahr, A.L.; Kincaid, C.T.; Mercer, J.W.
1987-04-01
The objective of this report is to summarize the various modeling approaches that were used to simulate solute transport in a variably saturated medium. In particular, the technical strengths and weaknesses of each approach are discussed, and conclusions and recommendations for future studies are made. Five models are considered: (1) one-dimensional analytical and semianalytical solutions of the classical deterministic convection-dispersion equation (van Genuchten, Parker, and Kool, this report); (2) one-dimensional simulation using a continuous-time Markov process (Knighton and Wagenet, this report); (3) one-dimensional simulation using the time domain method and the frequency domain method (Duffy and Al-Hassan, this report); (4) a one-dimensional numerical approach that combines a solution of the classical deterministic convection-dispersion equation with a chemical equilibrium speciation model (Cederberg, this report); and (5) a three-dimensional numerical solution of the classical deterministic convection-dispersion equation (Huyakorn, Jones, Parker, Wadsworth, and White, this report). As part of the discussion, the input data and modeling results are summarized. The models were used in a data analysis mode, as opposed to a predictive mode; thus, the following discussion concentrates on the data analysis aspects of model use. All the approaches were similar in that they were based on a convection-dispersion model of solute transport. Each discussion addresses the modeling approaches in the order listed above.
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to simulate. The method presented simulates only a few rays and is therefore extremely efficient. Moreover, a specific mathematical formalism is used to simplify the equations. Only a few simple equations are needed, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raskin, Cody; Owen, J. Michael, E-mail: raskin1@llnl.gov, E-mail: mikeowen@llnl.gov
2016-11-01
We discuss a generalization of the classic Keplerian disk test problem allowing for both pressure and rotational support, as a method of testing astrophysical codes incorporating both gravitation and hydrodynamics. We argue for the inclusion of pressure in rotating disk simulations on the grounds that realistic, astrophysical disks exhibit non-negligible pressure support. We then apply this test problem to examine the performance of various smoothed particle hydrodynamics (SPH) methods incorporating a number of improvements proposed over the years to address problems noted in modeling the classical gravitation-only Keplerian disk. We also apply this test to a newly developed extension of SPH based on reproducing kernels called CRKSPH. Counterintuitively, we find that pressure support worsens the performance of traditional SPH on this problem, causing unphysical collapse away from the steady-state disk solution even more rapidly than the purely gravitational problem, whereas CRKSPH greatly reduces this error.
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-07
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007)]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H(5)(+) complexes and, as a consequence, the exchange mechanism occurs in a lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H(5)(+) complex, as is done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011)] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
NASA Astrophysics Data System (ADS)
Gómez-Carrasco, Susana; González-Sánchez, Lola; Aguado, Alfredo; Sanz-Sanz, Cristina; Zanchet, Alexandre; Roncero, Octavio
2012-09-01
In this work we present a dynamically biased statistical model to describe the evolution of the title reaction from statistical to a more direct mechanism, using quasi-classical trajectories (QCT). The method is based on the one previously proposed by Park and Light [J. Chem. Phys. 126, 044305 (2007), 10.1063/1.2430711]. A recent global potential energy surface is used here to calculate the capture probabilities, instead of the long-range ion-induced dipole interactions. The dynamical constraints are introduced by considering a scrambling matrix which depends on energy and determines the probability of the identity/hop/exchange mechanisms. These probabilities are calculated using QCT. It is found that the high zero-point energy of the fragments is transferred to the rest of the degrees of freedom, which shortens the lifetime of H_5^+ complexes and, as a consequence, the exchange mechanism occurs in a lower proportion. The zero-point energy (ZPE) is not properly described in quasi-classical trajectory calculations, so an approximation is made in which the initial ZPE of the reactants is reduced in the QCT calculations to obtain a new ZPE-biased scrambling matrix. This reduction of the ZPE is explained by the need to correct the pure classical level number of the H_5^+ complex, as is done in classical simulations of unimolecular processes and to obtain equivalent quantum and classical rate constants using Rice-Ramsperger-Kassel-Marcus theory. This matrix allows us to obtain a ratio of hop/exchange mechanisms, α(T), in rather good agreement with recent experimental results by Crabtree et al. [J. Chem. Phys. 134, 194311 (2011), 10.1063/1.3587246] at room temperature. At lower temperatures, however, the present simulations predict ratios that are too high because the biased scrambling matrix is not statistical enough. This demonstrates the importance of applying quantum methods to simulate this reaction at the low temperatures of astrophysical interest.
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2015-04-15
By means of molecular dynamics (MD) simulations, we demonstrate that porous graphene can efficiently separate gases according to their molecular sizes. The flux sequence from the classical MD simulation is H2 > CO2 ≫ N2 > Ar > CH4, which generally follows the trend in the kinetic diameters. This trend is also confirmed by the fluxes based on the computed free energy barriers for gas permeation, using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene. - Graphical abstract: Classical molecular dynamics simulations show the flux trend of H2 > CO2 ≫ N2 > Ar > CH4 for permeation through porous graphene, in excellent agreement with a recent experiment. - Highlights: • Classical MD simulations show the flux trend of H2 > CO2 ≫ N2 > Ar > CH4 for permeation through porous graphene. • Free energy calculations yield permeation barriers for these gases. • Selectivities for several gas pairs are estimated from the free-energy barriers and the kinetic theory of gases. • The selectivity trend is in excellent agreement with a recent experiment.
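The flux ranking from free-energy barriers and the kinetic theory of gases can be sketched as an effusion-like rate, a thermal-velocity prefactor times a Boltzmann barrier factor. The barrier heights below are illustrative placeholders, not the values computed in the paper:

```python
import math

T = 300.0           # temperature (K)
R = 8.314462618     # gas constant, J/(mol K)

# (molar mass in kg/mol, assumed free-energy barrier in kJ/mol) -- the barriers
# here are hypothetical numbers chosen only to illustrate the ranking mechanism.
gases = {
    "H2":  (2.016e-3,  5.0),
    "CO2": (44.01e-3, 10.0),
    "N2":  (28.01e-3, 25.0),
    "Ar":  (39.95e-3, 30.0),
    "CH4": (16.04e-3, 40.0),
}

def relative_flux(molar_mass, barrier_kj_per_mol):
    """Effusion-like flux: 1/sqrt(2*pi*m*R*T) thermal factor times exp(-dF/RT)."""
    v_factor = 1.0 / math.sqrt(2.0 * math.pi * molar_mass * R * T)
    return v_factor * math.exp(-barrier_kj_per_mol * 1e3 / (R * T))

fluxes = {g: relative_flux(m, dF) for g, (m, dF) in gases.items()}
ranked = sorted(fluxes, key=fluxes.get, reverse=True)
print(ranked)  # with these assumed barriers: ['H2', 'CO2', 'N2', 'Ar', 'CH4']
```

With any barrier set that grows with kinetic diameter, the Boltzmann factor dominates the mass prefactor and reproduces the reported ordering.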
Thermodynamic aspects of reformulation of automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-09-01
A study of procedures for measuring and predicting the RVP and the initial vapor emissions of reformulated gasoline blends which contain one or more oxygenated compounds, viz., ethanol, MTBE, ETBE, and TAME, is discussed. Two computer simulation methods were programmed and tested. In one method, Method A, the D-86 distillation data on the blend are used for predicting the blend's RVP from a simulation of the Mini RVPE (RVP Equivalent) experiment. The other method, Method B, relies on analytical information (PIANO analyses) on the nature of the base gasoline and utilizes classical thermodynamics to simulate the same Mini RVPE experiment. Method B also predicts the composition and other properties of the initial vapor emission from the fuel. The results indicate that predictions made with both methods agree very well with experimental values. The predictions with Method B illustrate that the admixture of an oxygenate to a gasoline blend changes the volatility of the blend and also the composition of the vapor emission. In the example simulations, a blend with 10 vol % ethanol increases the RVP by about 0.8 psi, and the accompanying vapor emission contains about 15% ethanol. Similarly, the vapor emission of a fuel blend with 11 vol % MTBE was calculated to contain about 11 vol % MTBE. Predictions of the behavior of blends with ETBE and ETBE+ethanol are also presented and discussed. Recognizing that considerable effort has been invested in developing empirical correlations for predicting RVP, the writers consider the purpose of this paper to be pointing out that the methods of classical thermodynamics are adequate, and that there is a need for additional work in developing certain fundamental data that are still lacking.
NASA Astrophysics Data System (ADS)
Andersen, A.; Govind, N.; Laskin, A.
2017-12-01
Mineral surfaces have been implicated as potential protectors of soil organic matter (SOM) against decomposition and ultimate mineralization to small molecules which can provide nutrients for plants and soil microbes and can also contribute to the Earth's elemental cycles. SOM is a complex mixture of organic molecules of biological origin at varying degrees of decomposition and can, itself, self-assemble in such a way as to expose some biomolecule types to biotic and abiotic attack while protecting other biomolecule types. The organization of SOM and SOM with mineral surfaces and solvated metal ions is driven by an interplay of van der Waals and electrostatic interactions leading to partitioning of hydrophilic (e.g. sugars) and hydrophobic (e.g., lipids) SOM components that can be bridged with amphiphilic molecules (e.g., proteins). Classical molecular dynamics simulations can shed light on assemblies of organic molecules alone or complexation with mineral surfaces. The role of chemical reactions is also an important consideration in potential chemical changes of the organic species such as oxidation/reduction, degradation, chemisorption to mineral surfaces, and complexation with solvated metal ions to form organometallic systems. For the study of chemical reactivity, quantum chemistry methods can be employed and combined with structural insight provided by classical MD simulations. Moreover, quantum chemistry can also simulate spectroscopic signatures based on chemical structure and is a valuable tool in interpreting spectra from, notably, x-ray absorption spectroscopy (XAS). In this presentation, we will discuss our classical MD and quantum chemistry findings on a model SOM system interacting with mineral surfaces and solvated metal ions.
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), 10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.
Lima, Nicola; Caneschi, Andrea; Gatteschi, Dante; Kritikos, Mikael; Westin, L Gunnar
2006-03-20
The susceptibility of the large transition-metal cluster [Mn19O12(MOE)14(MOEH)10].MOEH (MOE = OC2H2O-CH3) has been fitted through classical Monte Carlo simulation, and an estimate of the exchange coupling constants has been obtained. With these results, it has been possible to perform a full-matrix diagonalization of the cluster core, which was used to provide information on the nature of the low-lying levels.
First-principles simulations of heat transport
NASA Astrophysics Data System (ADS)
Puligheddu, Marcello; Gygi, Francois; Galli, Giulia
2017-11-01
Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close-to-equilibrium conditions, which requires only calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.
A Novel DEM Approach to Simulate Block Propagation on Forested Slopes
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Dorren, Luuk; Berger, Frédéric
2018-03-01
In order to model rockfall on forested slopes, we developed a trajectory rockfall model based on the discrete element method (DEM). This model is able to take the complex mechanical processes at work during an impact into account (large deformations, complex contact conditions) and can explicitly simulate block/soil, block/tree contacts as well as contacts between neighbouring trees. In this paper, we describe the DEM model developed and we use it to assess the protective effect of different types of forest. In addition, we compared it with a more classical rockfall simulation model. The results highlight that forests can significantly reduce rockfall hazard and that the spatial structure of coppice forests has to be taken into account in rockfall simulations in order to avoid overestimating the protective role of these forest structures against rockfall hazard. In addition, the protective role of the forests is mainly influenced by the basal area. Finally, the advantages and limitations of the DEM model were compared with classical rockfall modelling approaches.
Transfer of training for aerospace operations: How to measure, validate, and improve it
NASA Technical Reports Server (NTRS)
Cohen, Malcolm M.
1993-01-01
It has been a commonly accepted practice to train pilots and astronauts in expensive, extremely sophisticated, high fidelity simulators, with as much of the real-world feel and response as possible. High fidelity and high validity have often been assumed to be inextricably interwoven, although this assumption may not be warranted. The Project Mercury rate-damping task on the Naval Air Warfare Center's Human Centrifuge Dynamic Flight Simulator, the shuttle landing task on the NASA-ARC Vertical Motion Simulator, and the almost complete acceptance by the airline industry of full-up Boeing 767 flight simulators, are just a few examples of this approach. For obvious reasons, the classical models of transfer of training have never been adequately evaluated in aerospace operations, and there have been few, if any, scientifically valid replacements for the classical models. This paper reviews some of the earlier work involving transfer of training in aerospace operations, and discusses some of the methods by which appropriate criteria for assessing the validity of training may be established.
Plans for wind energy system simulation
NASA Technical Reports Server (NTRS)
Dreier, M. E.
1978-01-01
A digital computer code and a special-purpose hybrid computer are introduced. The digital computer program, the Root Perturbation Method (RPM), is an implementation of the classic Floquet procedure which circumvents numerical problems associated with the extraction of Floquet roots. The hybrid computer, the Wind Energy System Time domain simulator (WEST), yields real-time loads and deformation information essential to design and system stability investigations.
Power line interference attenuation in multi-channel sEMG signals: Algorithms and analysis.
Soedirdjo, S D H; Ullah, K; Merletti, R
2015-08-01
Electromyogram (EMG) recordings are often corrupted by power line interference (PLI) even when the skin is prepared and well-designed instruments are used. This study focuses on the analysis of some recent and classical digital signal processing approaches that have been used to attenuate, if not eliminate, power line interference in EMG signals. A comparison of the signal to interference ratio (SIR) of the output signals is presented for four methods: the classical notch filter, spectral interpolation, the adaptive noise canceller with phase locked loop (ANC-PLL) and the adaptive filter, applied to simulated multichannel monopolar EMG signals with different SIR. The effect of each method on the shape of the EMG signals is also analyzed. The results show that the ANC-PLL method gives the best output SIR and the lowest shape distortion of the four. Classical notch filtering is the simplest method, but some information may be lost because it removes both the interference and the EMG content in the notch band; it therefore has the lowest performance and introduces distortion into the resulting signals.
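A minimal sketch of the classical notch-filter approach: a textbook second-order IIR notch (zeros on the unit circle at the interference frequency, poles just inside it) applied to a surrogate EMG signal. The white-noise "EMG", the sampling rate and the pole radius are all illustrative choices, not the paper's settings:

```python
import math, random

fs = 2000.0   # sampling rate (Hz), typical for surface EMG
f0 = 50.0     # power line frequency (Hz)
r = 0.98      # pole radius: closer to 1 gives a narrower notch

w0 = 2 * math.pi * f0 / fs
# Second-order IIR notch: zeros on the unit circle at +/- w0, poles at radius r.
b = [1.0, -2.0 * math.cos(w0), 1.0]
a = [1.0, -2.0 * r * math.cos(w0), r * r]
g = sum(a) / sum(b)            # normalize to unit gain at DC
b = [g * c for c in b]

def notch(x):
    """Direct-form difference equation y[n] = sum(b*x) - sum(a[1:]*y)."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        acc = b[0] * x[n]
        if n >= 1:
            acc += b[1] * x[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * x[n - 2] - a[2] * y[n - 2]
        y[n] = acc
    return y

def bin_power(x, f):
    """Squared DFT magnitude of x at frequency f (single-bin check)."""
    re = sum(v * math.cos(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * f * n / fs) for n, v in enumerate(x))
    return re * re + im * im

rng = random.Random(0)
t = [n / fs for n in range(2000)]                      # 1 s of data
emg = [rng.gauss(0.0, 0.1) for _ in t]                 # surrogate EMG (white noise)
sig = [e + 0.5 * math.sin(2 * math.pi * f0 * ti) for e, ti in zip(emg, t)]

clean = notch(sig)
print(bin_power(clean, f0) < 0.01 * bin_power(sig, f0))  # True
```

The check also illustrates the paper's caveat: any EMG power that falls inside the notch band is removed along with the interference.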
3D Hydrodynamic Simulation of Classical Novae Explosions
NASA Astrophysics Data System (ADS)
Kendrick, Coleman J.
2015-01-01
This project investigates the formation and lifecycle of classical novae and determines how parameters such as white dwarf mass, star mass, and separation affect the evolution of the rotating binary system. These parameters affect the accretion rate, the frequency of the nova explosions, and the light curves. Each particle in the simulation represents a volume of hydrogen gas and is initialized randomly in the outer shell of the companion star. The forces on each particle include gravity, centrifugal, Coriolis, friction, and Langevin forces; the friction and Langevin forces model the viscosity and internal pressure of the gas. A velocity Verlet method with a one-second time step is used to compute the velocities and positions of the particles. A new particle recycling method was developed, which was critical for computing an accurate and stable accretion rate and keeping the particle count reasonable. I used C++ and OpenCL to create my simulations and ran them on two Nvidia GTX 580s. My simulations used up to 1 million particles and required up to 10 hours to complete. My simulation results for the novae U Scorpii and DD Circinus are consistent with professional hydrodynamic simulations and observed experimental data (light curves and outburst frequencies). When the white dwarf mass is increased, the time between explosions decreases dramatically. My model was used to make the first prediction for the next outburst of nova DD Circinus. My simulations also show that the companion star blocks the expanding gas shell, leading to an asymmetrical expanding shell.
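The velocity Verlet update used in such particle simulations can be sketched on a 1-D harmonic oscillator (a stand-in for the project's gravitational and drag forces); its bounded energy error over long runs is what makes it attractive here:

```python
def velocity_verlet(x, v, accel, dt, steps):
    """Velocity Verlet: a symplectic, time-reversible, second-order integrator."""
    a = accel(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt * dt   # position update
        a_new = accel(x)                     # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration
        a = a_new
    return x, v

# 1-D harmonic oscillator as a toy force model
k, m = 1.0, 1.0
accel = lambda x: -k * x / m
x, v = 1.0, 0.0
e0 = 0.5 * m * v ** 2 + 0.5 * k * x ** 2
x, v = velocity_verlet(x, v, accel, dt=0.01, steps=10_000)
e1 = 0.5 * m * v ** 2 + 0.5 * k * x ** 2
print(abs(e1 - e0) < 1e-4)  # energy drift stays tiny: True
```

Unlike forward Euler, the energy error oscillates within an O(dt²) band instead of growing, which is why the scheme tolerates millions of steps.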
Three dimensional iterative beam propagation method for optical waveguide devices
NASA Astrophysics Data System (ADS)
Ma, Changbao; Van Keuren, Edward
2006-10-01
The finite difference beam propagation method (FD-BPM) is an effective model for simulating a wide range of optical waveguide structures. The classical FD-BPMs are based on the Crank-Nicolson scheme and, in tridiagonal form, can be solved using the Thomas method. We present a different type of algorithm for 3-D structures. In this algorithm, the wave equation is formulated into a large sparse matrix equation which can be solved using iterative methods. The simulation window shifting scheme and threshold technique introduced in our earlier work are utilized to overcome the convergence problem of iterative methods for large sparse matrix equations and wide-angle simulations. This method enables us to develop higher-order 3-D wide-angle (WA-) BPMs based on Padé approximant operators and the multistep method, which are commonly used in WA-BPMs for 2-D structures. Simulations using the new methods are compared to analytical results to confirm their effectiveness and applicability.
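For reference, the Thomas method that solves the tridiagonal Crank-Nicolson system in the classical 2-D FD-BPM can be sketched as a generic tridiagonal solver (not the authors' code):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n): a = sub-, b = main, c = super-diagonal, d = RHS.
    a[0] and c[-1] are unused by convention."""
    n = len(b)
    cp = [0.0] * n          # modified super-diagonal
    dp = [0.0] * n          # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):   # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a known system: [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has x = (1,2,3)
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0])
print([round(v, 10) for v in x])  # [1.0, 2.0, 3.0]
```

The 3-D formulation in the paper abandons this direct solve precisely because the resulting sparse matrix is no longer tridiagonal, motivating the iterative approach.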
Hybrid annealing: Coupling a quantum simulator to a classical computer
NASA Astrophysics Data System (ADS)
Graß, Tobias; Lewenstein, Maciej
2017-05-01
Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
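The hybrid loop, stochastic proposals standing in for the quantum simulator's measurement outcomes and classical greedy post-selection, can be caricatured on a small random energy model. The uniform random proposal below is only a classical stand-in for the quantum fluctuation and projective measurement step:

```python
import random

rng = random.Random(42)
N = 12
# Random energy model: each of the 2**N spin configurations gets an i.i.d. Gaussian energy.
energies = [rng.gauss(0.0, 1.0) for _ in range(2 ** N)]

def quantum_proposal(state):
    """Classical stand-in for the quantum step: the real scheme applies quantum
    fluctuations and a projective measurement; here we flip a random non-empty
    subset of spins."""
    return state ^ rng.randrange(1, 2 ** N)

def hybrid_anneal(steps):
    state = rng.randrange(2 ** N)
    for _ in range(steps):
        candidate = quantum_proposal(state)
        # Classical post-processing: the computer evaluates the measured
        # configuration and keeps it only if it lowers the energy.
        if energies[candidate] < energies[state]:
            state = candidate
    return state

found = hybrid_anneal(20000)
# The greedy hybrid loop reliably lands among the lowest-energy configurations.
print(energies[found] <= sorted(energies)[5])  # True
```

In the paper the proposals come from a quantum device exploring in superposition; the classical device's role, enforcing the energy decrease, is exactly the `if` branch above.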
Quasi-classical approaches to vibronic spectra revisited
NASA Astrophysics Data System (ADS)
Karsten, Sven; Ivanov, Sergei D.; Bokarev, Sergey I.; Kühn, Oliver
2018-03-01
The framework to approach quasi-classical dynamics in the electronic ground state is well established and is based on the Kubo-transformed time correlation function (TCF), being the most classical-like quantum TCF. Here we discuss whether the choice of the Kubo-transformed TCF as a starting point for simulating vibronic spectra is as unambiguous as it is for vibrational ones. Employing imaginary-time path integral techniques in combination with the interaction representation allowed us to formulate a method for simulating vibronic spectra in the adiabatic regime that takes nuclear quantum effects and dynamics on multiple potential energy surfaces into account. Further, a generalized quantum TCF is proposed that contains many well-established TCFs, including the Kubo one, as particular cases. Importantly, it also provides a framework to construct new quantum TCFs. Applying the developed methodology to the generalized TCF leads to a plethora of simulation protocols, which are based on the well-known TCFs as well as on new ones. Their performance is investigated on 1D anharmonic model systems at finite temperatures. It is shown that the protocols based on the new TCFs may lead to superior results with respect to those based on the common ones. The strategies to find the optimal approach are discussed.
Whitley, Heather D.; Scullard, Christian R.; Benedict, Lorin X.; ...
2014-12-04
Here, we present a discussion of kinetic theory treatments of linear electrical and thermal transport in hydrogen plasmas, for a regime of interest to inertial confinement fusion applications. In order to assess the accuracy of one of the more involved of these approaches, classical Lenard-Balescu theory, we perform classical molecular dynamics simulations of hydrogen plasmas using two-body quantum statistical potentials and compute both electrical and thermal conductivity from our particle trajectories using the Kubo approach. Our classical Lenard-Balescu results employing the identical statistical potentials agree well with the simulations.
Frison, Severine; Kerac, Marko; Checchi, Francesco; Nicholas, Jennifer
2017-01-01
The assessment of the prevalence of acute malnutrition in children under five is widely used for the detection of emergencies, planning interventions, advocacy, and monitoring and evaluation. This study examined PROBIT Methods which convert parameters (mean and standard deviation (SD)) of a normally distributed variable to a cumulative probability below any cut-off to estimate acute malnutrition in children under five using Middle-Upper Arm Circumference (MUAC). We assessed the performance of: PROBIT Method I, with mean MUAC from the survey sample and MUAC SD from a database of previous surveys; and PROBIT Method II, with mean and SD of MUAC observed in the survey sample. Specifically, we generated sub-samples from 852 survey datasets, simulating 100 surveys for eight sample sizes. Overall the methods were tested on 681 600 simulated surveys. PROBIT methods relying on sample sizes as small as 50 had better performance than the classic method for estimating and classifying the prevalence of acute malnutrition. They had better precision in the estimation of acute malnutrition for all sample sizes and better coverage for smaller sample sizes, while having relatively little bias. They classified situations accurately for a threshold of 5% acute malnutrition. Both PROBIT methods had similar outcomes. PROBIT Methods have a clear advantage in the assessment of acute malnutrition prevalence based on MUAC, compared to the classic method. Their use would require much lower sample sizes, thus enable great time and resource savings and permit timely and/or locally relevant prevalence estimates of acute malnutrition for a swift and well-targeted response.
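The core of the PROBIT conversion is just the normal cumulative distribution evaluated at the case-definition cut-off; a sketch with illustrative survey parameters (not the study's data):

```python
import math

def probit_prevalence(mean_muac, sd_muac, cutoff=125.0):
    """PROBIT estimate: P(MUAC < cutoff) under a normal model.
    125 mm is the standard MUAC case-definition cut-off for global acute malnutrition."""
    z = (cutoff - mean_muac) / sd_muac
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF at z

# Illustrative numbers: sample mean MUAC 145 mm, SD 12 mm
p = probit_prevalence(145.0, 12.0)
print(round(100 * p, 1))  # → 4.8 (percent below the cut-off)
```

Because only a mean and an SD must be estimated, rather than a small-count proportion, the estimator stays usable at the small sample sizes the study tests (Method I substitutes a database SD for the sample SD in the same formula).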
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H
2014-07-01
In the present study, we performed large eddy simulation (LES) of 75% stenosed axisymmetric and eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost point immersed boundary method (IBM) (Mark, and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/) and the results are in line with those obtained with WenoHemo.
Implementation of quantum game theory simulations using Python
NASA Astrophysics Data System (ADS)
Madrid S., A.
2013-05-01
This paper provides some examples of quantum games simulated in the Python programming language. The quantum games have been developed with the SymPy Python library, which permits solving quantum problems in symbolic form. Applying these methods of quantum mechanics to game theory makes it possible to achieve results not attainable before. To illustrate these methods, the quantum battle of the sexes, the prisoner's dilemma, and card games have been simulated. These solutions are able to exceed the classical bottleneck and obtain optimal quantum strategies. In this way, Python demonstrates that it is possible to implement more advanced and complicated quantum game algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Hongwei; High Magnetic Field Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031; Kong Xi
The method of quantum annealing (QA) is a promising way of solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in experimentally demonstrating an application of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schrödinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.
Aeroacoustics Computation for Nearly Fully Expanded Supersonic Jets Using the CE/SE Method
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Hultgren, Lennart S.; Wang, Xiao Y.; Chang, Sin-Chung; Jorgenson, Philip C. E.
2000-01-01
In this paper, the space-time conservation element solution element (CE/SE) method is tested in the classical axisymmetric jet instability problem, rendering good agreement with the linear theory. The CE/SE method is then applied to numerical simulations of several nearly fully expanded axisymmetric jet flows and their noise fields and qualitative agreement with available experimental and theoretical results is demonstrated.
Classical molecular dynamics simulation of electronically non-adiabatic processes.
Miller, William H; Cotton, Stephen J
2016-12-22
Both classical and quantum mechanics (as well as hybrids thereof, i.e., semiclassical approaches) find widespread use in simulating dynamical processes in molecular systems. For large chemical systems, however, which involve potential energy surfaces (PES) of general/arbitrary form, it is usually the case that only classical molecular dynamics (MD) approaches are feasible, and their use is thus ubiquitous nowadays, at least for chemical processes involving dynamics on a single PES (i.e., within a single Born-Oppenheimer electronic state). This paper reviews recent developments in an approach which extends standard classical MD methods to the treatment of electronically non-adiabatic processes, i.e., those that involve transitions between different electronic states. The approach treats nuclear and electronic degrees of freedom (DOF) equivalently (i.e., by classical mechanics, thereby retaining the simplicity of standard MD), and provides "quantization" of the electronic states through a symmetrical quasi-classical (SQC) windowing model. The approach is seen to be capable of treating extreme regimes of strong and weak coupling between the electronic states, as well as accurately describing coherence effects in the electronic DOF (including the de-coherence of such effects caused by coupling to the nuclear DOF). A survey of recent applications is presented to illustrate the performance of the approach. Also described is a newly developed variation on the original SQC model (found universally superior to the original) and a general extension of the SQC model to obtain the full electronic density matrix (at no additional cost/complexity).
Thomson, R; Kawrakow, I
2012-06-01
Widely-used classical trajectory Monte Carlo simulations of low energy electron transport neglect the quantum nature of electrons; however, at sub-1 keV energies quantum effects have the potential to become significant. This work compares quantum and classical simulations within a simplified model of electron transport in water. Electron transport is modeled in water droplets using quantum mechanical (QM) and classical trajectory Monte Carlo (MC) methods. Water droplets are modeled as collections of point scatterers representing water molecules from which electrons may be isotropically scattered. The role of inelastic scattering is investigated by introducing absorption. QM calculations involve numerically solving a system of coupled equations for the electron wavefield incident on each scatterer. A minimum distance between scatterers is introduced to approximate structured water. The average QM water droplet incoherent cross section is compared with the MC cross section; a relative error (RE) on the MC results is computed. RE varies with electron energy, average and minimum distances between scatterers, and scattering amplitude. The mean free path is generally the relevant length scale for estimating RE. The introduction of a minimum distance between scatterers increases RE substantially (factors of 5 to 10), suggesting that the structure of water must be modeled for accurate simulations. Inelastic scattering does not improve agreement between QM and MC simulations: for the same magnitude of elastic scattering, the introduction of inelastic scattering increases RE. Droplet cross sections are sensitive to droplet size and shape; considerable variations in RE are observed with changing droplet size and shape. At sub-1 keV energies, quantum effects may become non-negligible for electron transport in condensed media. Electron transport is strongly affected by the structure of the medium. 
Inelastic scattering does not improve agreement between QM and MC simulations of low energy electron transport in condensed media. © 2012 American Association of Physicists in Medicine.
Megyes, Tünde; Bálint, Szabolcs; Grósz, Tamás; Radnai, Tamás; Bakó, Imre; Sipos, Pál
2008-01-28
To determine the structure of aqueous sodium hydroxide solutions, results obtained from x-ray diffraction and computer simulation (molecular dynamics and Car-Parrinello) have been compared. The capabilities and limitations of the methods in describing the solution structure are discussed. For the solutions studied, diffraction methods were found to perform very well in describing the hydration spheres of the sodium ion and yield structural information on the anion's hydration structure. Classical molecular dynamics simulations were not able to correctly describe the bulk structure of these solutions. However, Car-Parrinello simulation proved to be a suitable tool in the detailed interpretation of the hydration sphere of ions and bulk structure of solutions. The results of Car-Parrinello simulations were compared with the findings of diffraction experiments.
Epistemic View of Quantum States and Communication Complexity of Quantum Channels
NASA Astrophysics Data System (ADS)
Montina, Alberto
2012-09-01
The communication complexity of a quantum channel is the minimal amount of classical communication required for classically simulating a process of state preparation, transmission through the channel, and subsequent measurement. It establishes a limit on the power of quantum communication in terms of classical resources. We show that classical simulations employing a finite amount of communication can be derived from a special class of hidden variable theories where quantum states represent statistical knowledge about the classical state and not an element of reality. This special class has attracted strong interest very recently. The communication cost of each derived simulation is given by the mutual information between the quantum state and the classical state of the parent hidden variable theory. Finally, we find that the communication complexity for single qubits is smaller than 1.28 bits. The previously known upper bound was 1.85 bits.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with a serial CPU implementation. We further use this method to simulate primitive model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
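The sequential Metropolis updating scheme at the heart of the method can be sketched in serial form on a toy one-particle system (the GPU/SIMD machinery and the Coulomb energy evaluation are omitted):

```python
import math, random

rng = random.Random(7)

def metropolis(energy, x0, beta, steps, delta=2.0):
    """Sequential Metropolis updates: propose a local move, accept with min(1, exp(-beta*dE))."""
    x = x0
    samples = []
    for _ in range(steps):
        x_new = x + rng.uniform(-delta, delta)
        d_e = energy(x_new) - energy(x)
        if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
            x = x_new                      # accept the move
        samples.append(x)                  # rejected moves re-count the old state
    return samples

# Toy check on one degree of freedom: sampling U(x) = x^2/2 at beta = 1
# should reproduce the exact Boltzmann variance <x^2> = 1.
samples = metropolis(lambda x: 0.5 * x * x, 0.0, beta=1.0, steps=200_000)
var = sum(v * v for v in samples) / len(samples)
print(abs(var - 1.0) < 0.05)  # True
```

In the GPU version this loop is parallelized across particles, with the exact pairwise Coulomb energy recomputed on the device; the acceptance rule itself is unchanged.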
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures.
Kuriata, Aleksander; Gierut, Aleksandra Maria; Oleniecki, Tymoteusz; Ciemny, Maciej Pawel; Kolinski, Andrzej; Kurcinski, Mateusz; Kmiecik, Sebastian
2018-05-14
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2.
NASA Astrophysics Data System (ADS)
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-03-01
The reaction ensemble and the constant pH method are well-known chemical equilibrium approaches to simulate protonation and deprotonation reactions in classical molecular dynamics and Monte Carlo simulations. In this article, we demonstrate the similarity between both methods under certain conditions. We perform molecular dynamics simulations of a weak polyelectrolyte in order to compare the titration curves obtained by both approaches. Our findings reveal a good agreement between the methods when the reaction ensemble is used to sweep the reaction constant. Pronounced differences between the reaction ensemble and the constant pH method can be observed for stronger acids and bases in terms of adaptive pH values. These deviations are due to the presence of explicit protons in the reaction ensemble method, which induce a screening of electrostatic interactions between the charged titrable groups of the polyelectrolyte. The outcomes of our simulations hint at a better applicability of the reaction ensemble method for systems in confined geometries and for titrable groups in polyelectrolytes with different pKa values.
Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms
NASA Astrophysics Data System (ADS)
Johansson, Niklas; Larsson, Jan-Åke
2017-09-01
A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problems, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable on a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problems do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.
A Nonlocal Peridynamic Plasticity Model for the Dynamic Flow and Fracture of Concrete.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogler, Tracy; Lammi, Christopher James
A nonlocal, ordinary peridynamic constitutive model is formulated to numerically simulate the pressure-dependent flow and fracture of heterogeneous, quasi-brittle materials, such as concrete. Classical mechanics and traditional computational modeling methods do not accurately model the distributed fracture observed within this family of materials. The peridynamic horizon, or range of influence, provides a characteristic length to the continuum and limits localization of fracture. Scaling laws are derived to relate the parameters of the peridynamic constitutive model to the parameters of the classical Drucker-Prager plasticity model. Thermodynamic analysis of associated and non-associated plastic flow is performed. An implicit integration algorithm is formulated to calculate the accumulated plastic bond extension and force state. The governing equations are linearized and the simulation of the quasi-static compression of a cylinder is compared to the classical theory. A dissipation-based peridynamic bond failure criterion is implemented to model fracture, and the splitting of a concrete cylinder is numerically simulated. Finally, calculation of the impact and spallation of a concrete structure is performed to assess the suitability of the material and failure models for simulating concrete during dynamic loadings. The peridynamic model is found to accurately simulate the inelastic deformation and fracture behavior of concrete during compression, splitting, and dynamically induced spall. The work expands the types of materials that can be modeled using peridynamics. A multi-scale methodology for simulating concrete to be used in conjunction with the plasticity model is presented. The work was funded by LDRD 158806.
Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice
NASA Astrophysics Data System (ADS)
Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko
2018-03-01
Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale, the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, which correspond to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of an algorithm with the linearized fluctuations, showing that the linearization indeed works and that Gauss's law is conserved.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
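The scaling step described above can be illustrated with a short sketch. This is a hedged reconstruction of a layer-weighted Beer-Lambert correction, not the authors' code: the per-layer time fractions `frac` are taken as given inputs (in the paper they come from a closed-form expression for the average classical photon path), and the absorption coefficients and refractive index below are assumed values.

```python
import math

C_VACUUM_MM_PS = 0.299792458  # speed of light in vacuum, mm/ps

def scale_reflectance(times_ps, r0, mu_a, frac, n_tissue=1.4):
    """Scale a zero-absorption time-resolved reflectance curve r0(t) with a
    layer-weighted Beer-Lambert factor.

    times_ps : detection times (ps)
    r0       : zero-absorption reflectance at each time point
    mu_a     : absorption coefficient per layer (1/mm), assumed values
    frac     : frac[i][j] = fraction of time a photon detected at
               times_ps[i] spent in layer j (an input here)
    """
    v = C_VACUUM_MM_PS / n_tissue  # photon speed in tissue, mm/ps
    scaled = []
    for t, r, f in zip(times_ps, r0, frac):
        path = v * t  # total path length travelled by time t
        # Weighted Beer-Lambert attenuation: each layer contributes
        # mu_a_j times its share of the total path.
        att = sum(m * fj * path for m, fj in zip(mu_a, f))
        scaled.append(r * math.exp(-att))
    return scaled
```

Because the scaling is applied after the fact, one zero-absorption simulation can generate a whole library of curves for different absorption coefficients.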
NASA Astrophysics Data System (ADS)
Fink, G.; Koch, M.
2010-12-01
An important aspect in water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g. droughts or floods. When dealing with the latter - as is the focus here - the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, for the purpose of bringing the flood risk down to an acceptable level. FFA is based on extreme value statistics theory. Despite the progress of methods in this scientific branch, the development, selection, and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is, for example, the case when the stationarity condition for a random flood time series is no longer satisfied, as could be the situation when long-term hydrological impacts of future climate change are to be considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) that is forced by predicted climate variables of a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the 21st-century estimated extreme flood series of the Fulda river turn out to be only mildly non-stationary, alleviating the need for further action and concern at first sight, a more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels.
This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists of the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than the desired return period). The maximum value of each trajectory is then computed, and all of these maxima are used to determine the empirical distribution of the maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and given life duration can be inferred. The results of numerous simulations show that for stationary flood series, the new FSMA method results, as expected, in nearly identical risk values to the classical FFA approach. However, once the flood time series becomes slightly non-stationary - for reasons as discussed - and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. In other words, for the same risk, the new FSMA method would lead to a different design flood for a hydraulic structure than the classical FFA method. This, in turn, could lead to some cost savings in the realization of a hydraulic project.
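The FSMA procedure sketched in the abstract lends itself to a short illustration: simulate many series of annual maxima from a fitted GEV, take each series' maximum, and read the design flood off the empirical distribution of those maxima. This is a sketch under assumptions (inverse-transform GEV sampling, a plain empirical quantile, made-up distribution parameters), not the authors' code.

```python
import math
import random

def gev_sample(rng, mu, sigma, xi):
    """Draw one GEV variate by inverse-transform sampling."""
    u = rng.random()
    if abs(xi) < 1e-9:  # Gumbel limit as the shape parameter -> 0
        return mu - sigma * math.log(-math.log(u))
    return mu + sigma * ((-math.log(u)) ** (-xi) - 1.0) / xi

def fsma_design_flood(mu, sigma, xi, life_years, risk, n_traj=20000, seed=1):
    """FSMA sketch: simulate n_traj trajectories of annual maxima over the
    structure's design life, take each trajectory's maximum, and return the
    (1 - risk) quantile of their empirical distribution as the design flood."""
    rng = random.Random(seed)
    maxima = sorted(
        max(gev_sample(rng, mu, sigma, xi) for _ in range(life_years))
        for _ in range(n_traj)
    )
    idx = min(int((1.0 - risk) * n_traj), n_traj - 1)
    return maxima[idx]
```

Non-stationarity could be explored in the same framework by letting `mu` or `sigma` drift with the year index inside the inner loop.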
Shiraishi, Emi; Maeda, Kazuhiro; Kurata, Hiroyuki
2009-02-01
Numerical simulation of differential equation systems plays a major role in understanding how metabolic network models generate particular cellular functions. On the other hand, although many elegant algorithms have been presented, the classical technical problems with stiff differential equations remain to be solved. To relax the stiffness problem, we propose new practical methods: the gradual update of differential-algebraic equations, based on gradual application of the steady-state approximation to stiff differential equations, and the gradual update of the initial values in differential-algebraic equations. These empirical methods show high efficiency in simulating the steady-state solutions of stiff differential equations that existing solvers alone cannot solve. They are effective in extending the applicability of dynamic simulation to biochemical network models.
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on this data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large volumes of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing its state variables to be measured using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulating the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing its state variables to be measured using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). 
The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by a blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA's capabilities in conducting space activities. It is illustrated that the complexity of simulating particle interactions can be reduced from exponential to polynomial.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography.
Ben Bouallègue, F; Crouzet, J F; Mariano-Goulart, D
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for MLEM reconstruction. Based on Monte Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the GATE platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time.
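For context, the MLEM iteration to which such a stopping rule is applied can be sketched as follows. This is the generic textbook MLEM update on a toy dense system matrix, not the authors' reconstruction code, and the stopping criterion itself is not reproduced here.

```python
def mlem_iteration(x, A, y):
    """One MLEM update: x <- (x / s) * A^T (y / Ax), where s = A^T 1.

    x : current image estimate (list of voxel values)
    A : system matrix as a list of rows, A[i][j] = probability that an
        emission in voxel j is detected in bin i (toy dense stand-in)
    y : measured counts per detector bin
    """
    n_bins, n_vox = len(A), len(x)
    # Forward projection of the current estimate.
    proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
    # Ratio of measured to estimated counts in each bin.
    ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_bins)]
    # Sensitivity image (column sums of A).
    sens = [sum(A[i][j] for i in range(n_bins)) for j in range(n_vox)]
    # Multiplicative update of each voxel.
    return [x[j] * sum(A[i][j] * ratio[i] for i in range(n_bins)) / sens[j]
            if sens[j] > 0 else 0.0
            for j in range(n_vox)]
```

Run long enough, unregularized MLEM amplifies noise, which is exactly why a principled stopping point such as the one proposed in the paper matters.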
The Computer Simulation of Liquids by Molecular Dynamics.
ERIC Educational Resources Information Center
Smith, W.
1987-01-01
Proposes a mathematical computer model for the behavior of liquids using the classical dynamic principles of Sir Isaac Newton and the molecular dynamics method invented by other scientists. Concludes that other applications will be successful using supercomputers to go beyond simple Newtonian physics. (CW)
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs: often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, a new study shows that calculation of uncertainties need not be so lengthy. Comparing two approaches, the classical Bayesian "credible interval" and a less commonly used regression-based "confidence interval" method, Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands only 10 to 1,000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm^-1. Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
Banerjee, D; Dalmonte, M; Müller, M; Rico, E; Stebler, P; Wiese, U-J; Zoller, P
2012-10-26
Using a Fermi-Bose mixture of ultracold atoms in an optical lattice, we construct a quantum simulator for a U(1) gauge theory coupled to fermionic matter. The construction is based on quantum links which realize continuous gauge symmetry with discrete quantum variables. At low energies, quantum link models with staggered fermions emerge from a Hubbard-type model which can be quantum simulated. This allows us to investigate string breaking as well as the real-time evolution after a quench in gauge theories, which are inaccessible to classical simulation methods.
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Nuclear quantum effects and kinetic isotope effects in enzyme reactions.
Vardi-Kilshtain, Alexandra; Nitoker, Neta; Major, Dan Thomas
2015-09-15
Enzymes are extraordinarily effective catalysts evolved to perform well-defined and highly specific chemical transformations. Studying the nature of rate enhancements and the mechanistic strategies in enzymes is very important, both from a basic scientific point of view and in order to improve the rational design of biomimetics. The kinetic isotope effect (KIE) is a very important tool in the study of chemical reactions and has been used extensively in the field of enzymology. Theoretically, the prediction of KIEs in condensed-phase environments such as enzymes is challenging due to the need to include nuclear quantum effects (NQEs). Herein we describe recent progress in our group in the development of multi-scale simulation methods for the calculation of NQEs and accurate computation of KIEs. We also describe their application to several enzyme systems. In particular, we describe the use of combined quantum mechanics/molecular mechanics (QM/MM) methods in classical and quantum simulations. The development of various novel path-integral methods is reviewed. These methods are tailored to enzyme systems, where only a few degrees of freedom involved in the chemistry need to be quantized. The application of the hybrid QM/MM quantum-classical simulation approach to three case studies is presented. The first case involves the proton transfer in alanine racemase. The second case involves orotidine 5'-monophosphate decarboxylase, where multidimensional free energy simulations together with kinetic isotope effects are combined in the study of the reaction mechanism. Finally, we discuss the proton transfer in nitroalkane oxidase, where the enzyme employs tunneling as a catalytic fine-tuning tool. Copyright © 2015 Elsevier Inc. All rights reserved.
Thermodynamic properties for applications in chemical industry via classical force fields.
Guevara-Carrion, Gabriela; Hasse, Hans; Vrabec, Jadran
2012-01-01
Thermodynamic properties of fluids are of key importance for the chemical industry. Presently, the fluid property models used in process design and optimization are mostly equations of state or G^E models, which are parameterized using experimental data. Molecular modeling and simulation based on classical force fields is a promising alternative route, which in many cases reasonably complements the well-established methods. This chapter gives an introduction to the state of the art in this field regarding molecular models, simulation methods, and tools. Attention is given to the way modeling and simulation on the scale of molecular force fields interact with other scales, which is mainly by parameter inheritance. Parameters for molecular force fields are determined both bottom-up from quantum chemistry and top-down from experimental data. Commonly used functional forms for describing the intra- and intermolecular interactions are presented. Several approaches for ab initio to empirical force field parameterization are discussed. Some transferable force field families, which are frequently used in chemical engineering applications, are described. Furthermore, some examples of force fields that were parameterized for specific molecules are given. Molecular dynamics and Monte Carlo methods for the calculation of transport properties and vapor-liquid equilibria are introduced. Two case studies are presented. First, using liquid ammonia as an example, the capabilities of semi-empirical force fields, parameterized on the basis of quantum chemical information and experimental data, are discussed with respect to thermodynamic properties that are relevant for the chemical industry. Second, the ability of molecular simulation methods to accurately describe vapor-liquid equilibrium properties of binary mixtures containing CO2 is shown.
Geochemical Reaction Mechanism Discovery from Molecular Simulation
Stack, Andrew G.; Kent, Paul R. C.
2014-11-10
Methods to explore reactions using computer simulation are becoming increasingly quantitative, versatile, and robust. In this review, a rationale for how molecular simulation can help build better geochemical kinetics models is first given. We summarize some common methods that geochemists use to simulate reaction mechanisms, specifically classical molecular dynamics and quantum chemical methods, and discuss their strengths and weaknesses. Useful tools such as umbrella sampling and metadynamics that enable one to explore reactions are discussed. Several case studies wherein geochemists have used these tools to understand reaction mechanisms are presented, including water exchange and sorption on aqueous species and mineral surfaces, surface charging, crystal growth and dissolution, and electron transfer. The impact that molecular simulation has had on our understanding of geochemical reactivity is highlighted in each case. In the future, it is anticipated that molecular simulation of geochemical reaction mechanisms will become more commonplace as a tool to validate and interpret experimental data, and provide a check on the plausibility of geochemical kinetic models.
Non-classical nuclei and growth kinetics of Cr precipitates in FeCr alloys during ageing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yulan; Hu, Shenyang Y.; Zhang, Lei
2014-01-10
In this manuscript, we quantitatively calculated the thermodynamic properties of critical nuclei of Cr precipitates in FeCr alloys. The concentration profiles of the critical nuclei and the nucleation energy barriers were predicted by the constrained shrinking dimer dynamics (CSDD) method. It is found that the Cr concentration distribution in the critical nuclei strongly depends on the overall Cr concentration as well as temperature. The critical nuclei are non-classical because the concentration in the nuclei is smaller than the thermodynamic equilibrium value. These results are in agreement with atom probe observations. The growth kinetics of both classical and non-classical nuclei was investigated by the phase field approach. The simulations of critical nucleus evolution showed a number of interesting phenomena: 1) a critical classical nucleus first shrinks toward its non-classical counterpart and then grows; 2) a non-classical nucleus has much slower growth kinetics at its earlier growth stage compared to diffusion-controlled growth kinetics; 3) a critical classical nucleus grows faster at the earlier growth stage than the non-classical nucleus. All of these results demonstrate that it is critical to introduce the correct critical nuclei in order to correctly capture the kinetics of precipitation.
NASA Astrophysics Data System (ADS)
John, Christopher; Spura, Thomas; Habershon, Scott; Kühne, Thomas D.
2016-04-01
We present a simple and accurate computational method which facilitates ab initio path-integral molecular dynamics simulations, where the quantum-mechanical nature of the nuclei is explicitly taken into account, at essentially no additional computational cost in comparison to the corresponding calculation using classical nuclei. The predictive power of the proposed quantum ring-polymer contraction method is demonstrated by computing various static and dynamic properties of liquid water at ambient conditions using density functional theory. This development will enable routine inclusion of nuclear quantum effects in ab initio molecular dynamics simulations of condensed-phase systems.
Self-learning Monte Carlo method
Liu, Junwei; Qi, Yang; Meng, Zi Yang; ...
2017-01-04
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to the phase transition, for which local updates perform badly. In this Rapid Communication, we propose a general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from the training data generated in trial simulations and then used to speed up the actual simulation. Lastly, we demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup.
Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators
NASA Technical Reports Server (NTRS)
Fantini, Jay A.
1998-01-01
Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times per second. This paper presents algorithm development, FORTRAN code, and performance results.
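The inversion described above can be sketched as follows. This is an illustrative reimplementation in Python, not the Dryden FORTRAN code: the coefficient values, tolerance, and iteration cap are assumptions, and the calibration polynomial is assumed monotonic over the valid count range.

```python
def eu_to_counts(eu, coeffs, count_min, count_max, tol=0.5, max_iter=20):
    """Find the telemetry count c such that p(c) ~= eu, where p is the
    counts-to-EU calibration polynomial.

    coeffs: polynomial coefficients, lowest order first (assumed values).
    tol:    target accuracy in counts (0.5 aims at less-than-one-count error).
    """
    def p(c):  # calibration polynomial, counts -> EU
        return sum(a * c ** k for k, a in enumerate(coeffs))

    def dp(c):  # its derivative
        return sum(k * a * c ** (k - 1) for k, a in enumerate(coeffs) if k > 0)

    # Initial guess: reverse linear interpolation between the EU values
    # at the count limits.
    eu_min, eu_max = p(count_min), p(count_max)
    c = count_min + (eu - eu_min) * (count_max - count_min) / (eu_max - eu_min)

    for _ in range(max_iter):
        err = p(c) - eu
        if abs(err) < tol * abs(dp(c)):  # |err / p'(c)| < tol counts
            break
        c -= err / dp(c)  # Newton-Raphson step
    return round(c)
```

For a linear calibration the initial guess is already exact, so the loop exits immediately; higher-order polynomials typically converge in a handful of Newton steps.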
One-dimensional stitching interferometry assisted by a triple-beam interferometer
Xue, Junpeng; Huang, Lei; Gao, Bo; ...
2017-04-13
In this work, we propose a stitching interferometry approach that uses a triple-beam interferometer to measure both the distance and the tilt of all sub-apertures before the stitching process. The relative piston between two neighboring sub-apertures is then calculated using the data in the overlapping area. Comparisons are made between our method and the classical least-squares stitching method. Our method can improve the accuracy and repeatability of the classical stitching method when a large number of sub-aperture topographies are taken into account. Our simulations and experiments on flat and spherical mirrors indicate that the proposed method reduces the influence of the interferometer error on the stitched result. The comparison of the stitching system with Fizeau interferometry data shows about 2 nm root mean square agreement, and the repeatability is within ± 2.5 nm peak to valley.
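For a pure piston term, the least-squares estimate over the overlap reduces to a mean height difference. A minimal sketch of that classical step (the paper's method additionally feeds in the distances and tilts measured by the triple-beam interferometer):

```python
import numpy as np

def relative_piston(sub_a, sub_b, overlap_mask):
    """Piston offset between two sub-aperture height maps, estimated from
    their overlap. For a pure piston term the least-squares solution is the
    mean height difference over the overlapping pixels."""
    diff = sub_a[overlap_mask] - sub_b[overlap_mask]
    return float(diff.mean())
```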
A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second consists of models that scientists have extracted from collected natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations employ universal laws while minimizing the need for explicit, often phenomenological, models. They are based on manifold learning methodologies.
Direct simulation Monte Carlo method for the Uehling-Uhlenbeck-Boltzmann equation.
Garcia, Alejandro L; Wagner, Wolfgang
2003-11-01
In this paper we describe a direct simulation Monte Carlo algorithm for the Uehling-Uhlenbeck-Boltzmann equation in terms of Markov processes. This provides a unifying framework for both the classical Boltzmann case as well as the Fermi-Dirac and Bose-Einstein cases. We establish the foundation of the algorithm by demonstrating its link to the kinetic equation. By numerical experiments we study its sensitivity to the number of simulation particles and to the discretization of the velocity space, when approximating the steady-state distribution.
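The quantum-statistical modification to the classical DSMC collision step can be summarized as an acceptance factor; the sketch below is illustrative, and `uu_acceptance`, `f_post1`, `f_post2` are hypothetical names for estimated occupations of the post-collision velocity cells:

```python
def uu_acceptance(f_post1, f_post2, theta):
    """Quantum correction factor multiplying the classical DSMC collision
    acceptance probability. f_post1 and f_post2 are estimated occupations of
    the post-collision velocity cells; theta = -1 gives Fermi-Dirac (Pauli
    blocking), theta = +1 Bose-Einstein (stimulated scattering), and
    theta = 0 recovers the classical Boltzmann case."""
    return (1.0 + theta * f_post1) * (1.0 + theta * f_post2)
```

The factor makes plain why the algorithm's accuracy depends on the velocity-space discretization: the occupations must be estimated from the finite set of simulation particles per velocity cell.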
Polynomial-time quantum algorithm for the simulation of chemical dynamics
Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán
2008-01-01
The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207
Parametric models to compute tryptophan fluorescence wavelengths from classical protein simulations.
Lopez, Alvaro J; Martínez, Leandro
2018-02-26
Fluorescence spectroscopy is an important method to study protein conformational dynamics and solvation structures. Tryptophan (Trp) residues are the most important and practical intrinsic probes for protein fluorescence due to the variability of their fluorescence wavelengths: Trp residues emit at wavelengths ranging from 308 to 360 nm depending on the local molecular environment. Fluorescence involves electronic transitions, thus its computational modeling is a challenging task. We show that it is possible to predict the wavelength of emission of a Trp residue from classical molecular dynamics simulations by computing the solvent-accessible surface area or the electrostatic interaction between the indole group and the rest of the system. Linear parametric models are obtained to predict the maximum emission wavelengths with standard errors of the order of 5 nm. In a set of 19 proteins with emission wavelengths ranging from 308 to 352 nm, the best model predicts the maximum wavelength of emission with a standard error of 4.89 nm and a quadratic Pearson correlation coefficient of 0.81. These models can be used for the interpretation of fluorescence spectra of proteins with multiple Trp residues, or for which local Trp environmental variability exists and can be probed by classical molecular dynamics simulations. © 2018 Wiley Periodicals, Inc.
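A linear parametric model of this kind can be sketched as an ordinary least-squares fit; the (SASA, wavelength) pairs below are synthetic values chosen for illustration, not the paper's data:

```python
import numpy as np

# Synthetic pairs for illustration only. sasa: fractional solvent-accessible
# surface area of the indole group; lam: maximum emission wavelength in nm.
sasa = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70])
lam = np.array([310.0, 315.0, 323.0, 334.0, 343.0, 352.0])

# Ordinary least-squares fit of the linear parametric model lam = a*SASA + b.
A = np.vstack([sasa, np.ones_like(sasa)]).T
slope, intercept = np.linalg.lstsq(A, lam, rcond=None)[0]

def predict(s):
    """Predicted maximum emission wavelength [nm] for a given indole SASA."""
    return slope * s + intercept
```

A more exposed indole (larger SASA) yields a red-shifted prediction, consistent with the 308-360 nm range quoted above.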
In vitro dynamic model simulating the digestive tract of 6-month-old infants.
Passannanti, Francesca; Nigro, Federica; Gallo, Marianna; Tornatore, Fabio; Frasso, Annalisa; Saccone, Giulia; Budelli, Andrea; Barone, Maria V; Nigro, Roberto
2017-01-01
In vivo assays cannot always be conducted because of ethical reasons, technical constraints or costs, but a better understanding of the digestive process, especially in infants, could be of great help in preventing food-related pathologies and in developing new formulas with health benefits. In this context, in vitro dynamic systems to simulate human digestion and, in particular, infant digestion could become increasingly valuable. The aim was to simulate the digestive process through the use of a dynamic model of the infant gastroenteric apparatus to study the digestibility of starch-based infant foods. Using M.I.D.A. (Model of an Infant Digestive Apparatus), the oral, gastric and intestinal digestibility of two starch-based products was measured: 1) rice starch mixed with distilled water and treated using two different sterilization methods (the classical method with a holding temperature of 121°C for 37 min and the HTST method with a holding temperature of 137°C for 70 sec) and 2) a rice cream with (premium product) or without (basic product) an aliquot of rice flour fermented by Lactobacillus paracasei CBA L74. After digestion, the foods were analyzed for starch concentration, the amount of D-glucose released and the percentage of hydrolyzed starch. An in vitro dynamic system, referred to as M.I.D.A., was obtained. Using this system, starch digestion occurred only during the oral and intestinal phases, as expected. The D-glucose released during the intestinal phase differed between the classical and HTST methods (0.795 grams for the HTST versus 0.512 for the classical product). The same analysis was performed for the basic and premium products. In this case, the premium product showed a significant difference in the starch hydrolysis percentage during the entire process. The M.I.D.A. system was able to digest simple starches and a more complex food in the correct compartments.
In this study, better digestibility of the premium product was revealed.
A particle-particle hybrid method for kinetic and continuum equations
NASA Astrophysics Data System (ADS)
Tiwari, Sudarshan; Klar, Axel; Hardt, Steffen
2009-10-01
We present a coupling procedure for two different types of particle methods for the Boltzmann and the Navier-Stokes equations. A variant of the DSMC method is applied to simulate the Boltzmann equation, whereas a meshfree Lagrangian particle method, similar to the SPH method, is used for simulations of the Navier-Stokes equations. An automatic domain decomposition approach is used with the help of a continuum breakdown criterion. We apply adaptive spatial and time meshes. The classical Sod 1D shock tube problem is solved for a large range of Knudsen numbers. Results from the Boltzmann, Navier-Stokes and hybrid solvers are compared. The CPU time for the hybrid solver is 3-4 times lower than for the Boltzmann solver.
Kreula, J. M.; Clark, S. R.; Jaksch, D.
2016-01-01
We propose a non-linear, hybrid quantum-classical scheme for simulating non-equilibrium dynamics of strongly correlated fermions described by the Hubbard model in a Bethe lattice in the thermodynamic limit. Our scheme implements non-equilibrium dynamical mean field theory (DMFT) and uses a digital quantum simulator to solve a quantum impurity problem whose parameters are iterated to self-consistency via a classically computed feedback loop where quantum gate errors can be partly accounted for. We analyse the performance of the scheme in an example case. PMID:27609673
Organizational Agility Model and Simulation
2011-06-01
and response profile. Also, compensatory, anticipatory, adaptive, and learning behaviours (methods) are employed to modify stiffness and resistance...The hypothetical profile in Figure 1b shows some complexity changes for a major sporting event or...classical motion tracking problem using compensatory, anticipatory, adaptive, and learning behaviours. These behaviours modify the size, resistance, and
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
Controllers that use PID parameters require a good tuning method to improve control system performance. PID tuning methods fall into two categories: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods, both implemented on a hydraulic positioning system. Simulation results show that the proposed method reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in the hydraulic positioning system.
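The PSO-PID tuning loop can be sketched as follows: a global-best PSO searches (Kp, Ki, Kd) to minimize the integrated squared error of a simulated step response. This is a generic illustration on an assumed first-order plant, without the Variable Weight Grey-Taguchi DOE refinement the paper adds, and not the hydraulic system model:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_cost(kp, ki, kd, dt=0.01, t_end=5.0):
    """Integral of squared error for a unit step on the toy plant
    dy/dt = -y + u under PID control (explicit Euler integration)."""
    y, integ, prev_e, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u)
        if abs(y) > 1e6:            # diverged: return a large penalty cost
            return 1e9
        cost += e * e * dt
    return cost

def pso(n_particles=15, iters=25, bounds=(0.0, 10.0)):
    """Global-best PSO over (Kp, Ki, Kd) minimizing step_cost."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, 3))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([step_cost(*p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        v = np.clip(v, -2.0, 2.0)   # particle velocity limit
        x = np.clip(x + v, lo, hi)
        c = np.array([step_cost(*p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, float(pcost.min())
```

The velocity clip and the inertia/acceleration weights are exactly the PSO parameters the paper tunes with the DOE; here they are fixed at common textbook values.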
Understanding Cryptic Pocket Formation in Protein Targets by Enhanced Sampling Simulations.
Oleinikovas, Vladimiras; Saladino, Giorgio; Cossins, Benjamin P; Gervasio, Francesco L
2016-11-02
Cryptic pockets, that is, sites on protein targets that only become apparent when drugs bind, provide a promising alternative to classical binding sites for drug development. Here, we investigate the nature and dynamical properties of cryptic sites in four pharmacologically relevant targets, while comparing the efficacy of various simulation-based approaches in discovering them. We find that the studied cryptic sites do not correspond to local minima in the computed conformational free energy landscape of the unliganded proteins. They thus promptly close in all of the molecular dynamics simulations performed, irrespective of the force field used. Temperature-based enhanced sampling approaches, such as Parallel Tempering, do not improve the situation, as the entropic term does not help in the opening of the sites. The use of fragment probes helps, as in long simulations it occasionally leads to the opening of and binding to the cryptic sites. Our observed mechanism of cryptic site formation is suggestive of an interplay between two classical mechanisms: induced fit and conformational selection. Employing this insight, we developed a novel Hamiltonian Replica Exchange-based method "SWISH" (Sampling Water Interfaces through Scaled Hamiltonians), which, combined with probes, resulted in a promising general approach for cryptic site discovery. We also addressed the issue of "false positives" and propose a simple approach to distinguish them from druggable cryptic pockets. Our simulations, whose cumulative sampling time was more than 200 μs, help in clarifying the molecular mechanism of pocket formation, providing a solid basis for the choice of an efficient computational method.
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the individual-level R² (or Kendall's τ) and the trial-level R². We aimed to provide an R implementation of classical, well-established, and more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, Kendall's τ is estimated as a measure of individual-level surrogacy using a copula model. Then, the trial-level R² is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure Kendall's τ and treatment-by-trial interactions to measure the trial-level R². The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model.
We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
Exact and efficient simulation of concordant computation
NASA Astrophysics Data System (ADS)
Cable, Hugo; Browne, Daniel E.
2015-11-01
Concordant computation is a circuit-based model of quantum computation for mixed states that assumes that all correlations within the register are discord-free (i.e. the correlations are essentially classical) at every step of the computation. The question of whether concordant computation always admits efficient simulation by a classical computer was first considered by Eastin in arXiv:quant-ph/1006.4402v1, where an answer in the affirmative was given for circuits consisting only of one- and two-qubit gates. Building on this work, we develop the theory of classical simulation of concordant computation. We present a new framework for understanding such computations, argue that a larger class of concordant computations admits efficient simulation, and provide alternative proofs for the main results of arXiv:quant-ph/1006.4402v1, with an emphasis on the exactness of simulation, which is crucial for this model. We include a detailed analysis of the arithmetic complexity of solving equations in the simulation, as well as extensions to larger gates and qudits. We explore the limitations of our approach and discuss the challenges faced in developing efficient classical simulation algorithms for all concordant computations.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
Time-Domain Stability Margin Assessment
NASA Technical Reports Server (NTRS)
Clements, Keith
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
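The time-domain technique can be sketched on a toy system: simulate the closed loop with a pure transport delay and incrementally raise the loop gain until the response diverges. This is an illustrative first-order plant, not the SLS model:

```python
def is_unstable(k, delay_steps=20, dt=0.01, t_end=10.0):
    """Simulate unity feedback around the toy plant dy/dt = -y + u with loop
    gain k and a pure transport delay of delay_steps*dt seconds; flag
    instability when the error grows beyond a divergence threshold."""
    y = 0.0
    buf = [0.0] * delay_steps       # delay line for the control signal
    for _ in range(int(t_end / dt)):
        e = 1.0 - y
        if abs(e) > 100.0:          # divergence threshold
            return True
        buf.append(k * e)
        u = buf.pop(0)
        y += dt * (-y + u)
    return False

# Incrementally raise the loop gain until the time response goes unstable;
# the last stable value of k approximates the time-domain gain margin.
k = 1.0
while not is_unstable(k):
    k += 0.5
```

The same sweep applied to the added time delay instead of the gain yields a time-domain phase (delay) margin, which is what makes the approach comparable against frequency-domain margins.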
Coupled discrete element and finite volume solution of two classical soil mechanics problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Feng; Drumm, Eric; Guiochon, Georges A
One dimensional solutions for the classic critical upward seepage gradient/quick condition and the time rate of consolidation problems are obtained using coupled routines for the finite volume method (FVM) and discrete element method (DEM), and the results compared with the analytical solutions. The two phase flow in a system composed of fluid and solid is simulated with the fluid phase modeled by solving the averaged Navier-Stokes equation using the FVM and the solid phase modeled using the DEM. A framework is described for the coupling of two open source computer codes: YADE-OpenDEM for the discrete element method and OpenFOAM for the computational fluid dynamics. The particle-fluid interaction is quantified using a semi-empirical relationship proposed by Ergun [12]. The two classical verification problems are used to explore issues encountered when using coupled flow DEM codes, namely, the appropriate time step size for both the fluid and mechanical solution processes, the choice of the viscous damping coefficient, and the number of solid particles per finite fluid volume.
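The Ergun relationship used for the particle-fluid interaction gives the pressure drop per unit bed length as the sum of a viscous and an inertial term. A direct transcription (the default fluid properties are assumptions, roughly water at room temperature):

```python
def ergun_pressure_gradient(eps, u, d_p, rho_f=1000.0, mu=1.0e-3):
    """Ergun equation: pressure drop per unit length through a packed bed,
    dP/L = 150*mu*(1-eps)^2/(eps^3*d_p^2)*u
         + 1.75*(1-eps)*rho_f/(eps^3*d_p)*|u|*u
    with eps the porosity, u the superficial velocity [m/s], d_p the particle
    diameter [m], rho_f the fluid density and mu its dynamic viscosity."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps ** 3 * d_p ** 2) * u
    inertial = 1.75 * (1.0 - eps) * rho_f / (eps ** 3 * d_p) * abs(u) * u
    return viscous + inertial
```

At low superficial velocity the viscous (Kozeny-Carman-like) term dominates, which is the regime relevant to the quick-condition verification problem.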
Rubin, Jacob
1992-01-01
The feed forward (FF) method derives efficient operational equations for simulating transport of reacting solutes. It has been shown to be applicable in the presence of networks with any number of homogeneous and/or heterogeneous, classical reaction segments that consist of three, at most binary participants. Using a sequential (network type after network type) exploration approach and, independently, theoretical explanations, it is demonstrated for networks with classical reaction segments containing more than three, at most binary participants that if any one of such networks leads to a solvable transport problem then the FF method is applicable. Ways of helping to avoid networks that produce problem insolvability are developed and demonstrated. A previously suggested algebraic, matrix rank procedure has been adapted and augmented to serve as the main, easy-to-apply solvability test for already postulated networks. Four network conditions that often generate insolvability have been identified and studied. Their early detection during network formulation may help to avoid postulation of insolvable networks.
Abdelli, Radia; Rekioua, Djamila; Rekioua, Toufik; Tounzi, Abdelmounaïm
2013-07-01
This paper presents a modulated hysteresis direct torque control (MHDTC) applied to an induction generator (IG) used in wind energy conversion systems (WECS) connected to the electrical grid through a back-to-back converter. The principle of this strategy consists in superposing a triangular signal at the desired switching frequency, as in the PWM strategy, on the torque reference. This new modulated reference is compared to the estimated torque using a hysteresis controller, as in classical direct torque control (DTC). The aim of this new approach is to achieve a constant switching frequency and low THD in the grid current, with a unity power factor and minimal voltage variation despite wind variation. To highlight the effectiveness of the proposed method, a comparison was made with classical DTC and the field oriented control (FOC) method. The simulation results obtained with a variable wind profile show an adequate dynamic response of the conversion system using the proposed method compared to the classical approaches. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Hansson, Tony
1999-08-01
An inexpensive semiclassical method to simulate time-resolved pump-probe spectroscopy on molecular wave packets is applied to NaK molecules at high temperature. The method builds on the introduction of classical phase factors related to the r-centroids for vibronic transitions and assumes instantaneous laser-molecule interaction. All observed quantum mechanical features are reproduced - for short times where experimental data are available even quantitatively. Furthermore, it is shown that fully quantum dynamical molecular wave packet calculations on molecules at elevated temperatures, which do not include all rovibrational states, must be regarded with caution, as they easily might yield even qualitatively incorrect results.
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
NASA Astrophysics Data System (ADS)
Langenbach, K.; Heilig, M.; Horsch, M.; Hasse, H.
2018-03-01
A new method for predicting homogeneous bubble nucleation rates of pure compounds from vapor-liquid equilibrium (VLE) data is presented. It combines molecular dynamics simulation on the one side with density gradient theory using an equation of state (EOS) on the other. The new method is applied here to predict bubble nucleation rates in metastable liquid carbon dioxide (CO2). The molecular model of CO2 is taken from previous work of our group. PC-SAFT is used as an EOS. The consistency between the molecular model and the EOS is achieved by adjusting the PC-SAFT parameters to VLE data obtained from the molecular model. The influence parameter of density gradient theory is fitted to the surface tension of the molecular model. Massively parallel molecular dynamics simulations are performed close to the spinodal to compute bubble nucleation rates. From these simulations, the kinetic prefactor of the hybrid nucleation theory is estimated, whereas the nucleation barrier is calculated from density gradient theory. This enables the extrapolation of molecular simulation data to the whole metastable range including technically relevant densities. The results are tested against available experimental data and found to be in good agreement. The new method does not suffer from typical deficiencies of classical nucleation theory concerning the thermodynamic barrier at the spinodal and the bubble size dependence of surface tension, which is typically neglected in classical nucleation theory. In addition, the density in the center of critical bubbles and their surface tension is determined as a function of their radius. The usual linear Tolman correction to the capillarity approximation is found to be invalid.
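For contrast, the classical nucleation theory rate referred to above has the familiar form J = K exp(-16πσ³/(3Δp²kT)); a short sketch (in the hybrid method, K instead comes from MD near the spinodal and the barrier from density gradient theory):

```python
import math

def cnt_rate(prefactor, sigma, delta_p, temperature):
    """Classical nucleation theory rate J = K * exp(-dG*/(kB*T)), with the
    barrier dG* = 16*pi*sigma^3 / (3*delta_p^2). sigma: planar surface
    tension [N/m]; delta_p: pressure difference driving nucleation [Pa].
    Illustrates the deficiencies noted above: this barrier does not vanish
    at the spinodal and sigma is taken as curvature-independent."""
    k_B = 1.380649e-23  # Boltzmann constant [J/K]
    barrier = 16.0 * math.pi * sigma ** 3 / (3.0 * delta_p ** 2)
    return prefactor * math.exp(-barrier / (k_B * temperature))
```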
Communication cost of simulating Bell correlations.
Toner, B F; Bacon, D
2003-10-31
What classical resources are required to simulate quantum correlations? For the simplest and most important case of local projective measurements on an entangled Bell pair state, we show that exact simulation is possible using local hidden variables augmented by just one bit of classical communication. Certain quantum teleportation experiments, which teleport a single qubit, therefore admit a local hidden variables model.
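The protocol can be checked numerically. The sketch below follows the Toner-Bacon construction as commonly stated (shared random unit vectors λ₁, λ₂ plus one communicated bit), which reproduces the singlet correlation E(a,b) = -a·b:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unit_vectors(n):
    """Uniformly distributed points on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def simulate_singlet(a, b, n=200_000):
    """Toner-Bacon protocol: Alice outputs -sgn(a.l1) and sends the bit
    c = sgn(a.l1)*sgn(a.l2); Bob outputs sgn(b.(l1 + c*l2)). The average
    product reproduces the singlet correlation E(a,b) = -a.b."""
    l1, l2 = random_unit_vectors(n), random_unit_vectors(n)
    s1, s2 = np.sign(l1 @ a), np.sign(l2 @ a)
    alice = -s1
    c = s1 * s2                      # the single communicated bit
    bob = np.sign((l1 + c[:, None] * l2) @ b)
    return float((alice * bob).mean())
```

For aligned settings the output product is -1 on every run; for other angles the Monte Carlo average converges to -cos(θ) as the sample count grows.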
Predicting chaos in memristive oscillator via harmonic balance method.
Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai
2012-12-01
This paper studies possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
A Very Fast and Angular Momentum Conserving Tree Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcello, Dominic C., E-mail: dmarce504@gmail.com
There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.
On the Reduction of Molecular Degrees of Freedom in Computer Simulations
NASA Astrophysics Data System (ADS)
Lyubartsev, Alexander P.; Laaksonen, Aatto
Molecular simulations based on atomistic force fields are a standard theoretical tool in materials science, polymer science and the biosciences. While various methods incorporating quantum chemistry have been developed for condensed phase simulations during the last decade, there is another line of development aimed at bridging time and length scales through coarse-graining, which is expected to lead to some very interesting breakthroughs in the near future. In this lecture we first give some background on common atomistic force fields. After that, we review a few simple techniques for reducing the number of motional degrees of freedom to speed up simulations. Finally, we present a powerful method for removing uninteresting degrees of freedom. This is done by solving the inverse problem to obtain the interaction potentials. More precisely, we make use of the radial distribution functions (RDFs), and by using the method of Inverse Monte Carlo [Lyubartsev & Laaksonen, Phys. Rev. E 52, 3730 (1995)], we can construct effective potentials which are consistent with the original RDFs. This makes it possible to simulate much larger systems than would be possible using atomistic force fields. We present many examples: how to simulate aqueous electrolyte solutions without any water molecules while retaining the hydration structure around the ions, at the speed of a primitive electrolyte model calculation; how a coarse-grained model can be constructed for double-helix DNA and used, being accurate enough to reproduce the experimental results for ion condensation around DNA for several different counterions; and how site-site potentials for large-scale atomistic classical simulations of arbitrary liquids can be constructed from smaller scale ab initio simulations.
This methodology allows us to start from a simulation with the electrons and atomic nuclei, to construct a set of atomistic effective interaction potentials, and to use them in classical simulations. As a next step we can construct a new set of potentials beyond the atomistic description and carry out mesoscopic simulations, for example by using Dissipative Particle Dynamics. In this way we can tie together three different levels of description. The Dissipative Particle Dynamics method appears as a very promising tool to use with our coarse-grained potentials.
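The RDF-matching loop described above can be sketched with the iterative Boltzmann inversion update, a simpler cousin of the Inverse Monte Carlo scheme cited in the text (the grid, RDFs, and mixing factor below are illustrative, not from the lecture):

```python
import numpy as np

def ibi_update(V, g_sim, g_target, kT=1.0, alpha=0.5, eps=1e-12):
    """One iterative-Boltzmann-inversion-style refinement of an effective
    pair potential V on a radial grid: raise the potential where the
    simulated RDF overshoots the target, lower it where it undershoots.
    V     : current effective potential
    g_sim : RDF produced by simulating with V
    g_target : reference RDF (e.g. from an atomistic simulation)."""
    correction = kT * np.log((g_sim + eps) / (g_target + eps))
    return V + alpha * correction

# Toy check: if the simulated RDF already matches the target, V is unchanged.
r = np.linspace(0.5, 3.0, 50)
g = 1.0 + 0.3 * np.exp(-(r - 1.0) ** 2)
V0 = np.zeros_like(r)
V1 = ibi_update(V0, g, g)
```

In a real workflow this update alternates with a coarse-grained simulation until the RDFs converge; Inverse Monte Carlo replaces the logarithmic correction with a correction built from cross-correlations of the RDF bins.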
Zhang, Yong; Shi, Chaojun; Brennecke, Joan F; Maginn, Edward J
2014-06-12
A combined classical molecular dynamics (MD) and ab initio MD (AIMD) method was developed for the calculation of electrochemical windows (ECWs) of ionic liquids. In the method, the liquid phase of the ionic liquid is explicitly sampled using classical MD. The electrochemical window, estimated from the energy difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), is calculated at the density functional theory (DFT) level based on snapshots obtained from classical MD trajectories. The snapshots were relaxed using AIMD and quenched to their local energy minima, which ensures that the HOMO/LUMO calculations are based on stable configurations on the same potential energy surface. The new procedure was applied to a group of ionic liquids for which the ECWs were also experimentally measured in a self-consistent manner. It was found that the predicted ECWs not only reproduce the experimental trend very well but are also quantitatively accurate. The proposed method provides an efficient way to compare ECWs of ionic liquids in the same context, which has been difficult in experiment or simulation because ECW values depend sensitively on the experimental setup and conditions.
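The final averaging step of the workflow above is simple to sketch: given HOMO and LUMO energies from a set of quenched snapshots, the ECW estimate is the mean gap. The orbital energies below are purely illustrative, not data for any actual ionic liquid:

```python
def ecw_from_snapshots(homo, lumo):
    """Estimate an electrochemical window (ECW) as the average HOMO-LUMO
    gap (eV) over relaxed, quenched MD snapshots, as in the procedure
    described above."""
    gaps = [l - h for h, l in zip(homo, lumo)]
    return sum(gaps) / len(gaps)

# hypothetical per-snapshot orbital energies (eV)
homo = [-6.1, -6.3, -6.0]
lumo = [-1.2, -1.0, -1.1]
ecw = ecw_from_snapshots(homo, lumo)
```

Snapshot-to-snapshot scatter in the gap is what makes the explicit liquid-phase sampling step necessary; a single gas-phase ion pair would miss it.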
Tomography and generative training with quantum Boltzmann machines
NASA Astrophysics Data System (ADS)
Kieferová, Mária; Wiebe, Nathan
2017-12-01
The promise of quantum neural nets, which utilize quantum effects to model complex data sets, has made their development an aspirational goal for quantum machine learning and quantum computing in general. Here we provide methods of training quantum Boltzmann machines. Our work generalizes existing methods and provides additional approaches for training quantum neural networks that compare favorably to existing methods. We further demonstrate that quantum Boltzmann machines enable a form of partial quantum state tomography that further provides a generative model for the input quantum state; classical Boltzmann machines are incapable of this. This verifies the long-conjectured connection between tomography and quantum machine learning. Finally, we prove that classical computers cannot simulate our training process in general unless BQP = BPP, provide lower bounds on the complexity of the training procedures, and numerically investigate training for small nonstoquastic Hamiltonians.
Psychodrama: group psychotherapy through role playing.
Kipper, D A
1992-10-01
The theory and the therapeutic procedure of classical psychodrama are described along with brief illustrations. Classical psychodrama and sociodrama stemmed from role theory, enactments, "tele," the reciprocity of choices, and the theory of spontaneity-robopathy and creativity. The discussion focuses on key concepts such as the therapeutic team, the structure of the session, transference and reality, countertransference, the here-and-now and the encounter, the group-as-a-whole, resistance and difficult clients, and affect and cognition. Also described are the neoclassical approaches of psychodrama, action methods, and clinical role playing, and the significance of the concept of behavioral simulation in group psychotherapy.
Stochastic solution to quantum dynamics
NASA Technical Reports Server (NTRS)
John, Sarah; Wilson, John W.
1994-01-01
The quantum Liouville equation in the Wigner representation is solved numerically by using Monte Carlo methods. For incremental time steps, the propagation is implemented as a classical evolution in phase space modified by a quantum correction. The correction, which is a momentum jump function, is simulated in the quasi-classical approximation via a stochastic process. The technique, which is developed and validated in two- and three-dimensional momentum space, extends an earlier one-dimensional work. Also, by developing a new algorithm, the application to bound-state motion in an anharmonic quartic potential shows better agreement with exact solutions in two-dimensional phase space.
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A.; Frankel, Steven H.
2014-01-01
In the present study, we performed large eddy simulation (LES) of axisymmetric and eccentric arterial models with a 75% stenosis under steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, “Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow,” J. Fluid Mech., 582, pp. 253–280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and Vanwachem, 2008, “Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method,” J. Comput. Phys., 227(13), pp. 6660–6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid-scale (SGS) models, namely the classical Smagorinsky model (Smagorinsky, 1963, “General Circulation Experiments With the Primitive Equations,” Mon. Weather Rev., 91(10), pp. 99–164), the recently developed Vreman model (Vreman, 2004, “An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications,” Phys. Fluids, 16(10), pp. 3670–3681), and the Sigma model (Nicoud et al., 2011, “Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations,” Phys. Fluids, 23(8), 085106), are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations performed using the open-source field operation and manipulation (OpenFOAM) solver (“OpenFOAM,” http://www.openfoam.org/) give results in line with those obtained with WenoHemo. PMID:24801556
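The classical Smagorinsky closure evaluated above has a compact pointwise form: the subgrid eddy viscosity is nu_t = (Cs * Delta)^2 |S|, with S the strain-rate tensor. A minimal sketch (the coefficient value and test tensor are illustrative, not tied to the WenoHemo setup):

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, Cs=0.17):
    """Classical Smagorinsky subgrid eddy viscosity:
    nu_t = (Cs * Delta)^2 * |S|, where S is the strain-rate tensor
    (symmetric part of the velocity-gradient tensor grad_u) and
    |S| = sqrt(2 S_ij S_ij)."""
    S = 0.5 * (grad_u + grad_u.T)
    return (Cs * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))

# pure shear du/dy = 1 gives |S| = 1, so nu_t = (Cs * delta)^2
nu = smagorinsky_nu_t(np.array([[0.0, 1.0, 0.0],
                                [0.0, 0.0, 0.0],
                                [0.0, 0.0, 0.0]]), delta=1.0)
```

The Vreman and Sigma models replace |S| with invariants built to vanish in laminar shear, which is exactly why they behave differently in the transitional poststenotic region.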
NASA Astrophysics Data System (ADS)
Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.
2017-12-01
Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by the particles' memory of velocity, which persists over a characteristic length. Capturing this process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to it. CTRWs have been applied successfully in many studies but nonetheless have drawbacks. In classical CTRWs each particle makes a spatial transition, for which it draws a random transit time. Consecutive transit times are drawn independently of each other, which is only valid for sufficiently large spatial transitions. If we want to apply a finer numerical resolution than that, we have to implement memory into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such a transition matrix requires transport data from a fine-scale transport simulation, and the obtained matrix is valid only for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: (1) we simulate transport without restrictions on the transition length; (2) we parameterize our CTRW without requiring a transport simulation; (3) our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF axis of velocities, with reflection at 0 and 1. The Lévy flight is parameterized only by the correlation length. We explicitly model memory, including the evolution and decay of non-Fickianity, so the model extends from local via pre-asymptotic to asymptotic scales.
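The CDF-axis walk described above can be sketched as follows. This is a loose illustration of the idea, not the authors' implementation: Gaussian steps stand in for the Lévy-stable steps of the paper, the quantile mapping is a plain sorted-sample lookup, and all parameter values are invented:

```python
import random
random.seed(0)

def correlated_transit_times(n, step_scale, velocities, dx=1.0):
    """Generate correlated transit times by walking on the CDF axis of a
    pore-scale velocity sample, reflecting at 0 and 1, mapping each CDF
    position to a velocity quantile, and converting to a time dx / v.
    Small step_scale = long velocity memory (strong correlation)."""
    velocities = sorted(velocities)
    u = random.random()  # start at a random CDF position
    times = []
    for _ in range(n):
        u += random.gauss(0.0, step_scale)
        u = abs(u) % 2.0          # fold onto [0, 2)
        if u > 1.0:               # reflect back into [0, 1]
            u = 2.0 - u
        v = velocities[min(int(u * len(velocities)), len(velocities) - 1)]
        times.append(dx / v)
    return times

times = correlated_transit_times(1000, 0.05, velocities=[0.5, 1.0, 2.0, 4.0])
```

Because consecutive CDF positions are close for small steps, slow particles tend to stay slow across transitions, which is precisely the memory mechanism that independent-increment CTRWs omit.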
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spellings, Matthew; Biointerfaces Institute, University of Michigan, 2800 Plymouth Rd., Ann Arbor, MI 48109; Marson, Ryan L.
Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics, are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.
Efficient free energy calculations of quantum systems through computer simulations
NASA Astrophysics Data System (ADS)
Antonelli, Alex; Ramirez, Rafael; Herrero, Carlos; Hernandez, Eduardo
2009-03-01
In general, the classical limit is assumed in computer simulation calculations of free energy. This approximation, however, is not justifiable for a class of systems in which quantum contributions to the free energy cannot be neglected. The inclusion of quantum effects is important for the determination of reliable phase diagrams of these systems. In this work, we present a new methodology to compute the free energy of many-body quantum systems [1]. This methodology results from the combination of the path-integral formulation of statistical mechanics with efficient non-equilibrium methods to estimate free energy, namely the adiabatic switching and reversible scaling methods. A quantum Einstein crystal is used as a model to show the accuracy and reliability of the methodology. The new method is applied to the calculation of solid-liquid coexistence properties of neon. Our findings indicate that quantum contributions to properties such as the melting point, latent heat of fusion, entropy of fusion, and slope of the melting line can be up to 10% of the values calculated using the classical approximation. [1] R. M. Ramirez, C. P. Herrero, A. Antonelli, and E. R. Hernández, Journal of Chemical Physics 129, 064110 (2008)
Li, Tao; Zhang, Xiong; Zeng, Qiang; Wang, Bo; Zhang, Xiangdong
2018-04-30
The Clauser-Horne-Shimony-Holt (CHSH) inequality and the Klyachko-Can-Binicioglu-Shumovski (KCBS) inequality present a tradeoff under the no-disturbance (ND) principle. Recently, the fundamental monogamy relation between contextuality and nonlocality in quantum theory has been demonstrated experimentally. Here we show that this relation and tradeoff can also be simulated in classical optical systems. Using the polarization, path, and orbital angular momentum degrees of freedom of a classical optical beam, we observed the stringent monogamy relation between the two inequalities in a classical optical experiment by implementing projection measurements. Our results show the prospect of applying concepts recently developed in quantum information science to classical optical systems and optical information processing.
Pelzer, Kenley M.; Vázquez-Mayagoitia, Álvaro; Ratcliff, Laura E.; ...
2017-01-01
Organic photovoltaics (OPVs) are a promising carbon-neutral energy conversion technology, with recent improvements pushing power conversion efficiencies over 10%. A major factor limiting OPV performance is inefficient charge transport in organic semiconducting materials (OSCs). Due to strong coupling with lattice degrees of freedom, the charges form polarons, localized quasi-particles comprised of charges dressed with phonons. These polarons can be conceptualized as pseudo-atoms with a greater effective mass than a bare charge. We propose that due to this increased mass, polarons can be modeled with Langevin molecular dynamics (LMD), a classical approach with a computational cost much lower than most quantum mechanical methods. We present LMD simulations of charge transfer between a pair of fullerene molecules, which commonly serve as electron acceptors in OSCs. We find transfer rates consistent with experimental measurements of charge mobility, suggesting that this method may provide quantitative predictions of efficiency when used to simulate materials on the device scale. Our approach also offers information that is not captured in the overall transfer rate or mobility: in the simulation data, we observe exactly when and why intermolecular transfer events occur. In addition, we demonstrate that these simulations can shed light on the properties of polarons in OSCs. Much remains to be learned about these quasi-particles, and there are no widely accepted methods for calculating properties such as the effective mass and friction; our model offers a promising approach to exploring both, as well as providing insight into the details of polaron transport in OSCs.
Sellers, Michael S; Lísal, Martin; Brennan, John K
2016-03-21
We present an extension of various free-energy methodologies to determine the chemical potential of the solid and liquid phases of a fully flexible molecule using classical simulation. The methods are applied to the Smith-Bharadwaj atomistic potential representation of cyclotrimethylene trinitramine (RDX), a well-studied energetic material, to accurately determine the solid- and liquid-phase Gibbs free energies and the melting point (Tm). We outline an efficient technique to find the absolute chemical potential and melting point of a fully flexible molecule using one set of simulations to compute the solid absolute chemical potential and one set of simulations to compute the solid-liquid free-energy difference. With this combination, only a handful of simulations are needed, and the absolute values of the chemical potentials are obtained for use in other property calculations, such as the characterization of crystal polymorphs or the determination of the entropy. Using the LAMMPS molecular simulator, the Frenkel-Ladd and pseudo-supercritical path techniques are adapted to generate third-order fits of the solid and liquid chemical potentials. The results yield the thermodynamic melting point Tm = 488.75 K at 1.0 atm. We also validate these calculations and compare this melting point to one obtained from a typical superheating simulation technique.
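The last step of the workflow above, locating Tm from the third-order fits, reduces to finding where the solid and liquid chemical-potential polynomials cross. A sketch with invented coefficients (not the RDX fits):

```python
import numpy as np

# Hypothetical third-order fits mu(T) for the solid and liquid phases,
# highest power first; the coefficients are illustrative only.
mu_solid = np.poly1d([1e-9, -2e-6, -0.05, 10.0])
mu_liquid = np.poly1d([2e-9, -1e-6, -0.06, 14.0])

# The thermodynamic melting point is the temperature where the two
# chemical potentials are equal: root of mu_solid - mu_liquid.
roots = (mu_solid - mu_liquid).roots
Tm = [r.real for r in roots
      if abs(r.imag) < 1e-9 and 100.0 < r.real < 1000.0]
```

Restricting to a physically sensible temperature window discards the spurious roots a cubic difference inevitably has.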
New method for estimating arterial pulse wave velocity at single site.
Abdessalem, Khaled Ben; Flaud, Patrice; Zobaidi, Samir
2018-01-01
The clinical importance of measuring local pulse wave velocity (PWV) has encouraged researchers to develop several local methods to estimate it. In this work, we propose a new method, the sum-of-squares method [Formula: see text], that allows estimation of PWV from simultaneous measurements of blood pressure (P) and arterial diameter (D) at a single location. Pulse waveforms generated by (1) two-dimensional (2D) fluid-structure interaction (FSI) simulation in a compliant tube, (2) a one-dimensional (1D) model of 55 larger human systemic arteries, and (3) experimental data were used to validate the new formula and evaluate several classical methods. The performance of the proposed method was assessed by comparing its results to the theoretical PWV calculated from the parameters of the model and/or to PWV estimated by several classical methods. It was found that values of PWV obtained by the developed method [Formula: see text] are in good agreement with the theoretical ones and with those calculated by the PA-loop and D2P-loop methods. The difference between the PWV calculated by [Formula: see text] and the PA-loop does not exceed 1% when data from simulations are used, 3% for in vitro data, and 5% for in vivo data. In addition, this study suggests that PWV estimated from arterial pressure and diameter waveforms is accurate, while methods that require flow rate (Q) or velocity (U) overestimate or underestimate PWV.
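The paper's exact P-D formula is elided above ("[Formula: see text]"), so it is not reproduced here. For orientation, the classical sum-of-squares idea applied to pressure and velocity waveforms has the water-hammer form c = (1/rho) * sqrt(sum dP^2 / sum dU^2); the sketch below shows that variant only, with synthetic data:

```python
import math

def pwv_sum_of_squares(P, U, rho):
    """Classical sum-of-squares PWV estimator from simultaneous pressure
    (Pa) and velocity (m/s) waveforms at one site:
    c = (1/rho) * sqrt(sum dP^2 / sum dU^2).
    The paper's new method uses diameter D instead of U; that exact
    formula is not given in the abstract and is not reproduced here."""
    dP = [b - a for a, b in zip(P, P[1:])]
    dU = [b - a for a, b in zip(U, U[1:])]
    return math.sqrt(sum(p * p for p in dP) / sum(u * u for u in dU)) / rho

# synthetic water-hammer data P = rho * c * U with c = 6 m/s
rho, c_true = 1000.0, 6.0
U = [0.0, 0.1, 0.3, 0.2]
P = [rho * c_true * u for u in U]
c = pwv_sum_of_squares(P, U, rho)
```

Summing squared increments over the whole beat is what makes this family of estimators less sensitive to the choice of a single reflection-free window than loop-based methods.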
Using collective variables to drive molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Fiorin, Giacomo; Klein, Michael L.; Hénin, Jérôme
2013-12-01
A software framework is introduced that facilitates the application of biasing algorithms to collective variables of the type commonly employed to drive massively parallel molecular dynamics (MD) simulations. The modular framework that is presented enables one to combine existing collective variables into new ones, and combine any chosen collective variable with available biasing methods. The latter include the classic time-dependent biases referred to as steered MD and targeted MD, the temperature-accelerated MD algorithm, as well as the adaptive free-energy biases called metadynamics and adaptive biasing force. The present modular software is extensible, and portable between commonly used MD simulation engines.
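The simplest of the time-dependent biases listed above, a steered-MD-style moving harmonic restraint on one collective variable, can be sketched in a few lines (the force constant and schedule are illustrative, and this is not the framework's actual API):

```python
def harmonic_bias(cv_value, center, k=10.0):
    """Harmonic bias U = 0.5 * k * (cv - center)^2 on a collective
    variable; returns (bias energy, force on the CV). Moving `center`
    along a schedule during the run steers the simulation, the essence
    of steered MD."""
    delta = cv_value - center
    return 0.5 * k * delta * delta, -k * delta

# steering schedule: drag the restraint center from 0 to 1 while the
# CV (held at 0 here for illustration) lags behind
energies = [harmonic_bias(0.0, c, k=2.0)[0] for c in (0.0, 0.5, 1.0)]
```

Metadynamics and adaptive biasing force differ only in how the bias is built: from a history of deposited Gaussians or from accumulated mean-force estimates, rather than from a prescribed schedule.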
Numerical reconstruction and injury biomechanism in a car-pedestrian crash accident.
Zou, Dong-Hua; Li, Zheng-Dong; Shao, Yu; Feng, Hao; Chen, Jian-Guo; Liu, Ning-Guo; Huang, Ping; Chen, Yi-Jiu
2012-12-01
To reconstruct a car-pedestrian crash accident using numerical simulation technology and explore the injury biomechanism as forensic evidence for injury identification, an integration of multi-body dynamics, finite element (FE), and classical methods was applied to a car-pedestrian crash accident. The location of the collision and the details of the traffic accident were determined by vehicle trace verification and autopsy. The accident reconstruction was performed by coupling the three-dimensional car behavior from PC-CRASH with a MADYMO dummy model. The FE collision models of the head and leg, developed from CT scans of the human remains, were loaded with the calculated dummy collision parameters. The impact biomechanical responses were extracted in terms of von Mises stress, relative displacement, and strain and stress fringes. The accident reconstruction results were consistent with the examination findings, and the biomechanisms of the head and leg injuries, illustrated through the FE method, were consistent with classical injury theories. Numerical simulation technology thus proves effective for identifying traffic accidents and exploring injury biomechanisms.
Least-squares dual characterization for ROI assessment in emission tomography
NASA Astrophysics Data System (ADS)
Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.
2013-06-01
Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial-volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex reduces the estimation bias by up to 14%, resulting in a reduction in the RMS error of up to 8.5% compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing can intuitively and efficiently handle the resolution-variance tradeoff.
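The classical baseline that LSD is compared against, pixel summation over a reconstructed image weighted by the ROI characteristic function, is trivial to sketch (the image and characteristic function below are toy values):

```python
import numpy as np

def roi_activity(image, chi):
    """Classical ROI estimate by pixel summation: sum the reconstructed
    image weighted by the ROI characteristic function chi (1 inside the
    ROI, 0 outside, possibly smoothed). The paper's LSD method instead
    solves a dual problem directly on the projection data; only the
    classical baseline is shown here."""
    return float(np.sum(image * chi))

image = np.array([[2.0, 0.0],
                  [1.0, 3.0]])
chi = np.array([[1.0, 0.0],
                [1.0, 0.0]])   # ROI covers the left column
roi = roi_activity(image, chi)
```

The weakness of this baseline is that the reconstruction step correlates pixel noise, which is why estimating the ROI variance from the raw data, as LSD does, is attractive.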
de Bock, Élodie; Hardouin, Jean-Benoit; Blanchin, Myriam; Le Neel, Tanguy; Kubis, Gildas; Bonnaud-Antignac, Angélique; Dantan, Étienne; Sébille, Véronique
2016-10-01
The objective was to compare classical test theory and Rasch-family models derived from item response theory for the analysis of longitudinal patient-reported outcome data with possibly informative intermittent missing items. A simulation study was performed to assess and compare the performance of classical test theory and the Rasch model in terms of bias, control of the type I error, and power of the test of a time effect. The type I error was controlled for both classical test theory and the Rasch model, whether data were complete or some items were missing. Both methods were unbiased and displayed similar power with complete data. When items were missing, the Rasch model remained unbiased and displayed higher power than classical test theory. The Rasch model thus performed better than the classical test theory approach for the analysis of longitudinal patient-reported outcomes with possibly informative intermittent missing items, mainly in terms of power. This study highlights the interest of Rasch-based models in clinical research and epidemiology for the analysis of incomplete patient-reported outcome data. © The Author(s) 2013.
Enstrophy Cascade in Decaying Two-Dimensional Quantum Turbulence
NASA Astrophysics Data System (ADS)
Reeves, Matthew T.; Billam, Thomas P.; Yu, Xiaoquan; Bradley, Ashton S.
2017-11-01
We report evidence for an enstrophy cascade in large-scale point-vortex simulations of decaying two-dimensional quantum turbulence. Devising a method to generate quantum vortex configurations with kinetic energy narrowly localized near a single length scale, we find the dynamics to be well characterized by a superfluid Reynolds number Re_s that depends only on the number of vortices and the initial kinetic energy scale. Under free evolution the vortices exhibit features of a classical enstrophy cascade, including a k^-3 power-law kinetic energy spectrum and a constant enstrophy flux associated with inertial transport to small scales. Clear signatures of the cascade emerge for N ≳ 500 vortices. Simulating up to very large Reynolds numbers (N = 32768 vortices), additional features of the classical theory are observed: the Kraichnan-Batchelor constant converges to C' ≈ 1.6, and the width of the k^-3 range scales as Re_s^(1/2).
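Extracting the power-law exponent of a spectrum like the k^-3 range above is a one-line log-log fit; the sketch below uses a synthetic Kraichnan-Batchelor-like spectrum, not the paper's simulation data:

```python
import numpy as np

def spectral_slope(k, E):
    """Least-squares power-law exponent of a kinetic-energy spectrum:
    fit log E = a + n log k over the given wavenumber range and return n.
    For a classical enstrophy cascade one expects n close to -3 in the
    inertial range."""
    n, _ = np.polyfit(np.log(k), np.log(E), 1)
    return n

k = np.logspace(0, 2, 50)
E = 1.6 * k ** -3.0          # synthetic k^-3 spectrum, C' = 1.6
```

In practice the fit window must be restricted to the inertial range, since forcing and dissipation scales bend the spectrum away from the power law at both ends.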
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
Simulating and assessing boson sampling experiments with phase-space representations
NASA Astrophysics Data System (ADS)
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
NASA Technical Reports Server (NTRS)
Clements, Keith; Wall, John
2017-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
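The incremental-gain idea described above can be illustrated on a toy plant. The sketch below is purely illustrative (a delayed-feedback integrator, not an SLS model): ramp the loop gain across trials and report the first gain whose time-domain response grows instead of decaying:

```python
def ramp_gain_to_instability(gains, dt=0.01, steps=2000):
    """Time-domain stability probe: for each candidate loop gain K,
    forward-Euler simulate the delayed-feedback system
        x'(t) = -K * x(t - tau),  tau = 0.5 s,
    from a unit initial history, and return the first K whose response
    magnitude grows over the run (the empirical stability boundary)."""
    tau_steps = 50                      # tau = tau_steps * dt = 0.5 s
    for K in gains:
        hist = [1.0] * (tau_steps + 1)  # constant unit initial history
        for _ in range(steps):
            x = hist[-1] + dt * (-K * hist[-tau_steps - 1])
            hist.append(x)
        if abs(hist[-1]) > abs(hist[tau_steps]):
            return K                    # response grew: unstable
    return None
```

For this plant, linear theory puts the stability boundary at K * tau = pi/2, so gains well below and above that threshold bracket the empirical answer, mirroring how a time-domain margin cross-checks the frequency-domain one.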
Howard, Rebecca J; Carnevale, Vincenzo; Delemotte, Lucie; Hellmich, Ute A; Rothberg, Brad S
2018-04-01
Ion translocation across biological barriers is a fundamental requirement for life. In many cases, controlling this process (for example, with neuroactive drugs) demands an understanding of rapid and reversible structural changes in membrane-embedded proteins, including ion channels and transporters. Classical approaches to electrophysiology and structural biology have provided valuable insights into several such proteins over macroscopic, often discontinuous, scales of space and time. Integrating these observations into meaningful mechanistic models now relies increasingly on computational methods, particularly molecular dynamics simulations, while surfacing important challenges in data management and conceptual alignment. Here, we seek to provide contemporary context, concrete examples, and a look to the future for bridging disciplinary gaps in biological ion transport. This article is part of a Special Issue entitled: Beyond the Structure-Function Horizon of Membrane Proteins edited by Ute Hellmich, Rupak Doshi and Benjamin McIlwain. Copyright © 2017 Elsevier B.V. All rights reserved.
Discrete stochastic simulation methods for chemically reacting systems.
Cao, Yang; Samuels, David C
2009-01-01
Discrete stochastic chemical kinetics describes the time evolution of a chemically reacting system by taking into account the fact that, in reality, chemical species are present in integer populations and exhibit some degree of randomness in their dynamical behavior. In recent years, with the development of new techniques to study biochemical dynamics in single cells, an increasing number of studies have applied this approach to chemical kinetics in cellular systems, where the small copy numbers of some reactant species may lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. This chapter reviews the fundamental theory of stochastic chemical kinetics and several simulation methods based on it. We focus on nonstiff biochemical systems and the two most important discrete stochastic simulation methods: Gillespie's stochastic simulation algorithm (SSA) and the tau-leaping method. Different implementation strategies for these two methods are discussed. We then recommend a relatively simple and efficient strategy that combines the strengths of both: the hybrid SSA/tau-leaping method. The implementation details of the hybrid strategy are given, and a related software package is introduced. Finally, the hybrid method is applied to simple biochemical systems as a demonstration of its application.
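Gillespie's direct-method SSA, named above, fits in a few lines for a single decay reaction (the rate constant and population below are illustrative):

```python
import math
import random
random.seed(1)

def gillespie_decay(x0, k, t_end):
    """Gillespie's direct-method SSA for the single reaction X -> 0 with
    rate constant k. At population x the propensity is a = k * x, the
    waiting time to the next firing is Exp(a), and each firing removes
    one molecule. Returns the (times, populations) trajectory."""
    t, x = 0.0, x0
    times, counts = [t], [x]
    while x > 0:
        a = k * x
        t += -math.log(1.0 - random.random()) / a  # exponential waiting time
        if t > t_end:
            break
        x -= 1
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_decay(x0=100, k=0.5, t_end=20.0)
```

Tau-leaping accelerates this by firing many reactions per step over a preselected interval tau, at the cost of a leap-size control problem; the hybrid method switches between the two regimes.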
Simulation of 2D rarefied gas flows based on the numerical solution of the Boltzmann equation
NASA Astrophysics Data System (ADS)
Poleshkin, Sergey O.; Malkov, Ewgenij A.; Kudryavtsev, Alexey N.; Shershnev, Anton A.; Bondar, Yevgeniy A.; Kohanchik, A. A.
2017-10-01
There are various methods for calculating rarefied gas flows, in particular statistical methods and deterministic methods based on finite-difference solutions of the nonlinear Boltzmann kinetic equation and on solutions of model kinetic equations. There is no universal method; each has its disadvantages in terms of efficiency or accuracy, and the choice of method depends on the problem to be solved and on the parameters of the calculated flows. Qualitative theoretical arguments help to determine the range of parameters over which each method solves problems effectively; however, it is advisable to perform comparative calculations of classical benchmark problems with different methods and different parameters to obtain quantitative confirmation of this reasoning. The paper provides the results of calculations performed by the authors with the Direct Simulation Monte Carlo method and with finite-difference methods for solving the Boltzmann equation and model kinetic equations. Based on this comparison, conclusions are drawn on selecting a particular method for flow simulations in various ranges of flow parameters.
NASA Astrophysics Data System (ADS)
Choi, Eunsong
Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed-matter systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate the complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model that gives rise to the incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The model is validated via a comparison of the spin-wave spectra with neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ for such a complex ground-state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories.
We conclude the study by showing an excellent agreement between the simulation and the experiment.
Solving search problems by strongly simulating quantum circuits
Johnson, T. H.; Biamonte, J. D.; Clark, S. R.; Jaksch, D.
2013-01-01
Simulating quantum circuits using classical computers lets us analyse the inner workings of quantum algorithms. The most complete type of simulation, strong simulation, is believed to be generally inefficient. Nevertheless, several efficient strong simulation techniques are known for restricted families of quantum circuits and we develop an additional technique in this article. Further, we show that strong simulation algorithms perform another fundamental task: solving search problems. Efficient strong simulation techniques allow solutions to a class of search problems to be counted and found efficiently. This enhances the utility of strong simulation methods, known or yet to be discovered, and extends the class of search problems known to be efficiently solvable. Relating strong simulation to search problems also bounds the computational power of efficiently strongly simulable circuits; if they could solve all problems in P this would imply that all problems in NP and #P could be solved in polynomial time. PMID:23390585
Generating Neuron Geometries for Detailed Three-Dimensional Simulations Using AnaMorph.
Mörschel, Konstantin; Breit, Markus; Queisser, Gillian
2017-07-01
Generating realistic and complex computational domains for numerical simulations is often a challenging task. In neuroscientific research, more and more one-dimensional morphology data is becoming publicly available through databases. This data, however, only contains point and diameter information not suitable for detailed three-dimensional simulations. In this paper, we present a novel framework, AnaMorph, that automatically generates water-tight surface meshes from one-dimensional point-diameter files. These surface triangulations can be used to simulate the electrical and biochemical behavior of the underlying cell. In addition to morphology generation, AnaMorph also performs quality control of the semi-automatically reconstructed cells coming from anatomical reconstructions. This toolset allows an extension from the classical dimension-reduced modeling and simulation of cellular processes to a full three-dimensional and morphology-including method, leading to novel structure-function interplay studies in the medical field. The developed numerical methods can further be employed in other areas where complex geometries are an essential component of numerical simulations.
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibits collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and are currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system whose transition point is approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical and the other for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I develop a non-equilibrium quench (NEQ) simulation that can completely avoid the critical slowing-down problem. For quantum phase transitions, I develop a new algorithm, the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin glasses and random graphs, which the techniques allow us to investigate efficiently.
From the classical perspective, using the NEQ approach I verify the universality class of 3D Ising spin glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions, and demonstrate that under this simulation scheme one can extract information associated with the classical and quantum spin-glass transitions without any prior knowledge of the transition points.
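As a minimal illustration of the quench protocols discussed above (and not the author's NEQ or QAQMC algorithms), a Metropolis simulation of the 2D Ising model with a linearly ramped temperature might look like this; the lattice size, ramp schedule, and function names are assumptions for the sketch:

```python
import math
import random

def ising_quench(L, sweeps_per_T, T_start, T_end, n_T, seed=0):
    """Metropolis sweeps on an L x L periodic Ising lattice while the
    temperature is ramped linearly from T_start to T_end (n_T >= 2 steps)."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for k in range(n_T):
        beta = 1.0 / (T_start + (T_end - T_start) * k / (n_T - 1))
        for _ in range(sweeps_per_T * L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * spins[i][j] * nb     # ferromagnetic J = 1
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
    return spins

# quench a 16 x 16 lattice from the disordered phase through T_c ~ 2.27
lattice = ising_quench(L=16, sweeps_per_T=5, T_start=4.0, T_end=1.0, n_T=20)
```

Faster ramps freeze in more domain walls; quantifying how the defect density scales with the ramp rate is exactly the kind of KZ analysis the NEQ approach performs.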
Quantum-Classical Correspondence Principle for Work Distributions
NASA Astrophysics Data System (ADS)
Jarzynski, Christopher; Quan, H. T.; Rahav, Saar
2015-07-01
For closed quantum systems driven away from equilibrium, work is often defined in terms of projective measurements of initial and final energies. This definition leads to statistical distributions of work that satisfy nonequilibrium work and fluctuation relations. While this two-point measurement definition of quantum work can be justified heuristically by appeal to the first law of thermodynamics, its relationship to the classical definition of work has not been carefully examined. In this paper, we employ semiclassical methods, combined with numerical simulations of a driven quartic oscillator, to study the correspondence between classical and quantal definitions of work in systems with 1 degree of freedom. We find that a semiclassical work distribution, built from classical trajectories that connect the initial and final energies, provides an excellent approximation to the quantum work distribution when the trajectories are assigned suitable phases and are allowed to interfere. Neglecting the interferences between trajectories reduces the distribution to that of the corresponding classical process. Hence, in the semiclassical limit, the quantum work distribution converges to the classical distribution, decorated by a quantum interference pattern. We also derive the form of the quantum work distribution at the boundary between classically allowed and forbidden regions, where this distribution tunnels into the forbidden region. Our results clarify how the correspondence principle applies in the context of quantum and classical work distributions and contribute to the understanding of work and nonequilibrium work relations in the quantum regime.
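For reference, the two-point measurement definition invoked above assigns each realization the work W equal to the difference of the measured final and initial energy eigenvalues, giving the distribution

```latex
P(W) = \sum_{n,m} p^{0}_{n}\, p^{\tau}_{m|n}\,
       \delta\!\left[ W - \left( E^{\tau}_{m} - E^{0}_{n} \right) \right],
\qquad
p^{0}_{n} = \frac{e^{-\beta E^{0}_{n}}}{Z_{0}},
```

where p^tau_{m|n} is the probability of obtaining final energy E^tau_m given initial eigenstate n. This distribution obeys the Jarzynski equality <exp(-beta W)> = exp(-beta Delta F), one of the nonequilibrium work relations mentioned in the abstract.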
Fourier analysis and signal processing by use of the Moebius inversion formula
NASA Technical Reports Server (NTRS)
Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.
1990-01-01
A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
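The number-theoretic identity underlying this approach is the Moebius inversion formula: if g(n) = sum over divisors d of n of f(d), then f(n) = sum_{d|n} mu(n/d) g(d). A small sketch of the identity itself (not of the paper's Fourier-coefficient algorithm; function names are ours) is:

```python
def mobius(n):
    """Moebius function mu(n) by trial division."""
    if n == 1:
        return 1
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:      # repeated prime factor => mu(n) = 0
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu   # n > 1 means one prime factor remains

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius_invert(g, n):
    """Recover f(n) from divisor sums g(m) = sum_{d | m} f(d)."""
    return sum(mobius(n // d) * g(d) for d in divisors(n))

# example: the divisor-sum function sigma arises from f(n) = n,
# so inverting it recovers n itself
sigma = lambda m: sum(divisors(m))
```

For instance, `mobius_invert(sigma, 12)` recovers 12 from the values of sigma alone.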
NASA Astrophysics Data System (ADS)
Gharibnezhad, Fahit; Mujica, Luis E.; Rodellar, José
2015-01-01
Using Principal Component Analysis (PCA) for Structural Health Monitoring (SHM) has received considerable attention over the past few years. PCA has been used not only as a direct method to identify, classify and localize damage but also as an important preliminary step for other methods. Despite its many strengths, PCA is very sensitive to outliers: anomalous observations that can distort the variance and covariance estimates at the heart of the method. The results of PCA in the presence of outliers are therefore not fully satisfactory. As its main contribution, this work suggests the use of a robust variant of PCA, insensitive to outliers, as an effective way to deal with this problem in the SHM field. In addition, robust PCA is compared with classical PCA for detecting probable damage. The comparison shows that robust PCA distinguishes damage much better than the classical method, and in many cases enables detection where classical PCA cannot discern between damaged and undamaged structures. Moreover, different types of robust PCA are compared with each other, and with their classical counterpart, in terms of damage detection. All the results are obtained through experiments on an aircraft turbine blade using piezoelectric transducers as sensors and actuators and adding simulated damage.
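The outlier sensitivity described above is easy to demonstrate numerically; the data set below is synthetic and the sketch uses plain covariance PCA, not the specific robust estimators compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# clean data: points scattered along the x-axis
clean = rng.normal(size=(200, 2)) * np.array([3.0, 0.3])

def leading_pc(data):
    """First principal component from the sample covariance matrix."""
    centered = data - data.mean(axis=0)
    cov = centered.T @ centered / (len(data) - 1)
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argmax(vals)]

pc_clean = leading_pc(clean)

# add a single gross outlier far off the dominant direction
outlier = np.vstack([clean, [[0.0, 60.0]]])
pc_outlier = leading_pc(outlier)

# |cos angle| between the two directions: 1 means unchanged
alignment = abs(pc_clean @ pc_outlier)
```

One contaminated observation out of 201 is enough to rotate the leading component away from the true dominant direction, which is exactly the failure mode robust PCA is designed to avoid.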
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Self-Learning Monte Carlo Method
NASA Astrophysics Data System (ADS)
Liu, Junwei; Qi, Yang; Meng, Zi Yang; Fu, Liang
Monte Carlo simulation is an unbiased numerical tool for studying classical and quantum many-body systems. One of its bottlenecks is the lack of a general and efficient update algorithm for large systems close to a phase transition or with strong frustration, for which local updates perform badly. In this work, we propose a new general-purpose Monte Carlo method, dubbed self-learning Monte Carlo (SLMC), in which an efficient update algorithm is first learned from training data generated in trial simulations and then used to speed up the actual simulation. We demonstrate the efficiency of SLMC in a spin model at the phase transition point, achieving a 10-20 times speedup. This work is supported by the DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award DE-SC0010526.
The polymer physics of single DNA confined in nanochannels.
Dai, Liang; Renner, C Benjamin; Doyle, Patrick S
2016-06-01
In recent years, applications and experimental studies of DNA in nanochannels have stimulated the investigation of the polymer physics of DNA in confinement. Recent advances in the physics of confined polymers, using DNA as a model polymer, have moved beyond the classic Odijk theory for the strong confinement, and the classic blob theory for the weak confinement. In this review, we present the current understanding of the behaviors of confined polymers while briefly reviewing classic theories. Three aspects of confined DNA are presented: static, dynamic, and topological properties. The relevant simulation methods are also summarized. In addition, comparisons of confined DNA with DNA under tension and DNA in semidilute solution are made to emphasize universal behaviors. Finally, an outlook of the possible future research for confined DNA is given. Copyright © 2015 Elsevier B.V. All rights reserved.
2016-01-01
Molecular mechanics force fields that explicitly account for induced polarization represent the next generation of physical models for molecular dynamics simulations. Several methods exist for modeling induced polarization, and here we review the classical Drude oscillator model, in which electronic degrees of freedom are modeled by charged particles attached to the nuclei of their core atoms by harmonic springs. We describe the latest developments in Drude force field parametrization and application, primarily in the last 15 years. Emphasis is placed on the Drude-2013 polarizable force field for proteins, DNA, lipids, and carbohydrates. We discuss its parametrization protocol, development history, and recent simulations of biologically interesting systems, highlighting specific studies in which induced polarization plays a critical role in reproducing experimental observables and understanding physical behavior. As the Drude oscillator model is computationally tractable and available in a wide range of simulation packages, it is anticipated that use of these more complex physical models will lead to new and important discoveries of the physical forces driving a range of chemical and biological phenomena. PMID:26815602
Yang, Li; Sun, Rui; Hase, William L
2011-11-08
In a previous study (J. Chem. Phys. 2008, 129, 094701) it was shown that for a large molecule, with a total energy much greater than its barrier for decomposition and whose vibrational modes are harmonic oscillators, the expressions for the classical Rice-Ramsperger-Kassel-Marcus (RRKM) (i.e., RRK) and classical transition-state theory (TST) rate constants become equivalent. Using this relationship, a molecule's unimolecular rate constants versus temperature may be determined from chemical dynamics simulations of microcanonical ensembles for the molecule at different total energies. The simulation identifies the molecule's unimolecular pathways and their Arrhenius parameters. In the work presented here, this approach is used to study the thermal decomposition of CH3-NH-CH═CH-CH3, an important constituent of the polymer of cross-linked epoxy resins. Direct dynamics simulations, at the MP2/6-31+G* level of theory, were used to investigate the decomposition of microcanonical ensembles for this molecule. The Arrhenius A and Ea parameters determined from the direct dynamics simulation are in very good agreement with the TST Arrhenius parameters for the MP2/6-31+G* potential energy surface. The simulation method applied here may be particularly useful for large molecules that have a multitude of decomposition pathways and whose transition states may be difficult to determine and have structures that are not readily obvious.
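Extracting Arrhenius parameters from rate constants at several temperatures amounts to a linear fit of ln k against 1/T; a generic sketch (with synthetic rate constants, not the simulation data of the paper) is:

```python
import math

R = 8.314462618e-3  # gas constant in kJ/(mol K)

def arrhenius_fit(temps, ks):
    """Least-squares fit of ln k = ln A - Ea/(R T); returns (A, Ea)."""
    xs = [1.0 / T for T in temps]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R   # A, Ea (kJ/mol)

# synthetic rate constants generated with A = 1e13 s^-1, Ea = 150 kJ/mol
A_true, Ea_true = 1e13, 150.0
temps = [800.0, 900.0, 1000.0, 1100.0, 1200.0]
ks = [A_true * math.exp(-Ea_true / (R * T)) for T in temps]
A_fit, Ea_fit = arrhenius_fit(temps, ks)
```

With exact synthetic data the fit recovers A and Ea to machine precision; with rate constants estimated from microcanonical dynamics, the scatter of the points about the line indicates how well the single-barrier Arrhenius form describes the pathway.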
Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deschamps, T; Schwartz, P; Trebotich, D
In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Set methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood flow inside the extracted surface without losing any complicated details and without building additional grids.
NASA Astrophysics Data System (ADS)
Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.
2018-05-01
Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. By exploiting surface measurements of the echoes of ultrasonic incident waves after their propagation through the medium, it allows potential defects (cracks and inhomogeneities) to be detected and the medium to be characterized. The understanding and interpretation of these experimental measurements is aided by numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high-frequency regime. On the other hand, asymptotic techniques are better suited to modeling high-frequency scattering over large distances but do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high-frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high-frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulating diffraction phenomena in acoustic or elastodynamic media. We describe its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments discussed.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
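One classical route to such sequences, simpler than the beta-transition-probability scheme proposed in the paper and shown here only as a hedged illustration, is to clip a latent Gaussian AR(1) process at a quantile threshold:

```python
import numpy as np
from statistics import NormalDist

def binary_ar1(n, phi, p=0.5, seed=0):
    """Autocorrelated 0/1 sequence by thresholding a stationary AR(1) Gaussian.

    phi -- lag-1 autocorrelation of the latent process (|phi| < 1)
    p   -- target marginal probability of a 1
    """
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()                      # stationary start
    innov = rng.standard_normal(n) * np.sqrt(1.0 - phi**2)
    for t in range(1, n):
        z[t] = phi * z[t - 1] + innov[t]
    thresh = NormalDist().inv_cdf(1.0 - p)            # keeps P(x = 1) = p
    return (z > thresh).astype(int)

x = binary_ar1(200_000, phi=0.8)
```

The binary lag-1 correlation is weaker than phi (for p = 0.5 it is (2/pi)·arcsin(phi)), which is why spectrum-matching iterations such as the iterative amplitude-adjusted Fourier transform embedded in the proposed algorithm are needed when a prescribed autocorrelation must be matched exactly.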
Diffusion Dynamics and Creative Destruction in a Simple Classical Model
2015-01-01
The article explores the impact of the diffusion of new methods of production on output and employment growth and income distribution within a Classical one-sector framework. Disequilibrium paths are studied analytically and in terms of simulations. Diffusion by differential growth affects aggregate dynamics through several channels. The analysis reveals the non-steady nature of economic change and shows that the adaptation pattern depends both on the innovation's factor-saving bias and on the extent of the bias, which determines the strength of the selection pressure on non-innovators. The typology of different cases developed shows various aspects of Schumpeter's concept of creative destruction. PMID:27642192
How Often Do Subscores Have Added Value? Results from Operational and Simulated Data
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman suggested a method based on classical test theory to determine whether subscores have added value over total scores. In this article I first provide a rich collection of results regarding when subscores were found to have added…
NASA Astrophysics Data System (ADS)
Kahros, Argyris
Incorporating quantum mechanics into an atomistic simulation necessarily involves solving the Schrodinger equation. Unfortunately, the computational expense associated with solving this equation scales miserably with the number of included quantum degrees of freedom (DOF). The situation is so dire, in fact, that a molecular dynamics (MD) simulation cannot include more than a small number of quantum DOFs before it becomes computationally intractable. Thus, if one were to simulate a relatively large system, such as one containing several hundred atoms or molecules, it would be unreasonable to attempt to include the effects of all of the electrons associated with all of the components of the system. The mixed quantum/classical (MQC) approach provides a way to circumvent this issue. It involves treating the vast majority of the system classically, which incurs minimal computational expense, and reserves the consideration of quantum mechanical effects for only the few degrees of freedom more directly involved in the chemical phenomenon being studied. For example, if one were to study the bonding of a single diatomic molecule in the gas phase, one could employ a MQC approach by treating the nuclei of the molecule's two atoms classically---including the deeply bound, low-energy electrons that change relatively little---and solving the Schrodinger equation only for the high energy electron(s) directly involved in the bonding of the classical cores. In such a way, one could study the bonding of this molecule in a rigorous fashion while treating only the directly related degrees of freedom quantum mechanically. Pseudopotentials are then responsible for dictating the interactions between the quantum and classical degrees of freedom. As these potentials are the sole link between the quantum and classical DOFs, their proper development is of the utmost importance. 
This Thesis is concerned primarily with my work on the development of novel, rigorous and dynamical pseudopotentials for use in mixed quantum/ classical simulations in the condensed phase. The pseudopotentials discussed within are constructed in an ab initio fashion, without the introduction of any empiricism, and are able to exactly reproduce the results of higher level, fully quantum mechanical Hartree-Fock calculations. A recurring theme in the following pages is overcoming the so-called frozen core approximation (FCA). This essentially comes down to creating pseudopotentials that are able to respond in some way to the local molecular environment in a rigorous fashion. The various methods and discussions that are part of this document are presented in the context of two particular systems. The first is the sodium dimer cation molecule, which serves as a proof of concept for the development of coordinate-dependent pseudopotentials and is the subject of Chapters 2 and 3. Next, the hydrated electron---the excess electron in liquid water---is tackled in an effort to address the recent controversy concerning its true structure and is the subject of Chapters 4 and 5. In essence, the work in this Dissertation is concerned with finding new ways to overcome the problem of a lack of infinite computer processing power.
Estimating the Error of an Analog Quantum Simulator by Additional Measurements
NASA Astrophysics Data System (ADS)
Schwenk, Iris; Zanker, Sebastian; Reiner, Jan-Michael; Leppäkangas, Juha; Marthaler, Michael
2017-12-01
We study an analog quantum simulator coupled to a reservoir with a known spectral density. The reservoir perturbs the quantum simulation by causing decoherence. The simulator is used to measure an operator average that cannot be calculated by any classical means. Since we cannot predict the result, it is difficult to estimate the effect of the environment; in particular, it is difficult to resolve whether the perturbation is small or whether the actual result of the simulation is in fact very different from the ideal system we intend to study. Here, we show that in specific systems a measurement of additional correlators can be used to verify the reliability of the quantum simulation. The procedure requires only additional measurements on the quantum simulator itself. We demonstrate the method theoretically in the case of a single spin coupled to a bosonic environment.
Rana, Malay Kumar; Chandra, Amalendu
2013-05-28
The behavior of water near a graphene sheet is investigated by means of ab initio and classical molecular dynamics simulations. The wetting of the graphene sheet by ab initio water and the relation of such behavior to the strength of classical dispersion interaction between surface atoms and water are explored. The first principles simulations reveal a layered solvation structure around the graphene sheet with a significant water density in the interfacial region implying no drying or cavitation effect. It is found that the ab initio results of water density at interfaces can be reproduced reasonably well by classical simulations with a tuned dispersion potential between the surface and water molecules. Calculations of the vibrational power spectrum from ab initio simulations reveal a shift of the intramolecular stretch modes to higher frequencies for interfacial water molecules when compared with those of the second solvation layer or bulk-like water, due to the presence of free OH modes near the graphene sheet. Also, a weakening of the water-water hydrogen bonds in the vicinity of the graphene surface is found in our ab initio simulations as reflected in the shift of intermolecular vibrational modes to lower frequencies for interfacial water molecules. The first principles calculations also reveal that the residence and orientational dynamics of interfacial water are somewhat slower than those of the second layer or bulk-like molecules. However, the lateral diffusion and hydrogen bond relaxation of interfacial water molecules are found to occur at a somewhat faster rate than those of the bulk-like water molecules. The classical molecular dynamics simulations with tuned Lennard-Jones surface-water interaction are found to produce dynamical results that are qualitatively similar to those of ab initio molecular dynamics simulations.
NASA Astrophysics Data System (ADS)
Schubert, Alexander; Falvo, Cyril; Meier, Christoph
2016-08-01
We present mixed quantum-classical simulations on relaxation and dephasing of vibrationally excited carbon monoxide within a protein environment. The methodology is based on a vibrational surface hopping approach treating the vibrational states of CO quantum mechanically, while all remaining degrees of freedom are described by means of classical molecular dynamics. The CO vibrational states form the "surfaces" for the classical trajectories of protein and solvent atoms. In return, environmentally induced non-adiabatic couplings between these states cause transitions describing the vibrational relaxation from first principles. The molecular dynamics simulation yields a detailed atomistic picture of the energy relaxation pathways, taking the molecular structure and dynamics of the protein and its solvent fully into account. Using the ultrafast photolysis of CO in the hemoprotein FixL as an example, we study the relaxation of vibrationally excited CO and evaluate the role of each of the FixL residues forming the heme pocket.
In vitro dynamic model simulating the digestive tract of 6-month-old infants
Gallo, Marianna; Tornatore, Fabio; Frasso, Annalisa; Saccone, Giulia; Budelli, Andrea; Barone, Maria V.
2017-01-01
Background In vivo assays cannot always be conducted because of ethical reasons, technical constraints or costs, but a better understanding of the digestive process, especially in infants, could be of great help in preventing food-related pathologies and in developing new formulas with health benefits. In this context, in vitro dynamic systems to simulate human digestion and, in particular, infant digestion could become increasingly valuable. Objective To simulate the digestive process through the use of a dynamic model of the infant gastroenteric apparatus to study the digestibility of starch-based infant foods. Design Using M.I.D.A. (Model of an Infant Digestive Apparatus), the oral, gastric and intestinal digestibility of two starch-based products was measured: 1) rice starch mixed with distilled water and treated using two different sterilization methods (the classical method with a holding temperature of 121°C for 37 min and the HTST method with a holding temperature of 137°C for 70 sec) and 2) a rice cream with (premium product) or without (basic product) an aliquot of rice flour fermented by Lactobacillus paracasei CBA L74. After digestion, the foods were analyzed for starch concentration, the amount of D-glucose released and the percentage of hydrolyzed starch. Results An in vitro dynamic system, referred to as M.I.D.A., was obtained. Using this system, starch digestion occurred only during the oral and intestinal phases, as expected. The D-glucose released during the intestinal phase differed between the classical and HTST methods (0.795 grams for the HTST versus 0.512 for the classical product). The same analysis was performed for the basic and premium products. In this case, the premium product showed a significant difference in terms of the starch hydrolysis percentage during the entire process. Conclusions The M.I.D.A. system was able to digest simple starches and a more complex food in the correct compartments.
In this study, better digestibility of the premium product was revealed. PMID:29261742
Vázquez-Mayagoitia, Álvaro; Ratcliff, Laura E.; Tretiak, Sergei; Bair, Raymond A.; Gray, Stephen K.; Van Voorhis, Troy; Larsen, Ross E.; Darling, Seth B.
2017-01-01
Organic photovoltaics (OPVs) are a promising carbon-neutral energy conversion technology, with recent improvements pushing power conversion efficiencies over 10%. A major factor limiting OPV performance is inefficiency of charge transport in organic semiconducting materials (OSCs). Due to strong coupling with lattice degrees of freedom, the charges form polarons, localized quasi-particles comprised of charges dressed with phonons. These polarons can be conceptualized as pseudo-atoms with a greater effective mass than a bare charge. We propose that due to this increased mass, polarons can be modeled with Langevin molecular dynamics (LMD), a classical approach with a computational cost much lower than most quantum mechanical methods. Here we present LMD simulations of charge transfer between a pair of fullerene molecules, which commonly serve as electron acceptors in OSCs. We find transfer rates consistent with experimental measurements of charge mobility, suggesting that this method may provide quantitative predictions of efficiency when used to simulate materials on the device scale. Our approach also offers information that is not captured in the overall transfer rate or mobility: in the simulation data, we observe exactly when and why intermolecular transfer events occur. In addition, we demonstrate that these simulations can shed light on the properties of polarons in OSCs. Much remains to be learned about these quasi-particles, and there are no widely accepted methods for calculating properties such as effective mass and friction. Our model offers a promising approach to exploring mass and friction as well as providing insight into the details of polaron transport in OSCs. PMID:28553494
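The Langevin molecular dynamics approach described above can be illustrated with a minimal one-dimensional sketch (not the authors' code): a heavy "pseudo-atom" integrated with the Euler-Maruyama scheme in a double-well potential, where the noise amplitude follows the fluctuation-dissipation relation. The potential and all parameter values are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def langevin_trajectory(steps=200_000, dt=1e-3, mass=10.0, gamma=5.0,
                        kT=0.25, seed=0):
    """Euler-Maruyama integration of underdamped Langevin dynamics for a
    heavy pseudo-atom in the toy double-well potential U(x) = (x^2 - 1)^2."""
    rng = np.random.default_rng(seed)
    x, v = -1.0, 0.0                              # start in the left well
    sigma = np.sqrt(2.0 * gamma * kT / mass)      # fluctuation-dissipation relation
    xs = np.empty(steps)
    for i in range(steps):
        force = -4.0 * x * (x * x - 1.0)          # -dU/dx
        v += dt * (force / mass - gamma * v) \
             + sigma * np.sqrt(dt) * rng.standard_normal()
        x += dt * v
        xs[i] = x
    return xs

traj = langevin_trajectory()
# barrier crossings play the role of discrete transfer events in the abstract
crossings = np.count_nonzero(np.diff(np.sign(traj)))
```

In a polaron setting, the two wells would stand for the two fullerene sites and each barrier crossing for an intermolecular transfer event; counting crossings per unit time gives a rate estimate.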
The Propulsive-Only Flight Control Problem
NASA Technical Reports Server (NTRS)
Blezad, Daniel J.
1996-01-01
Attitude control of aircraft using only the throttles is investigated. The long time constants of both the engines and of the aircraft dynamics, together with the coupling between longitudinal and lateral aircraft modes make piloted flight with failed control surfaces hazardous, especially when attempting to land. This research documents the results of in-flight operation using simulated failed flight controls and ground simulations of piloted propulsive-only control to touchdown. Augmentation control laws to assist the pilot are described using both optimal control and classical feedback methods. Piloted simulation using augmentation shows that simple and effective augmented control can be achieved in a wide variety of failed configurations.
Calibration of DEM parameters on shear test experiments using Kriging method
NASA Astrophysics Data System (ADS)
Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier
2017-06-01
Calibration of powder mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters with the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is made on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains that has to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while simulating a huge amount of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, defined as the distance between simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion relies on the two quantities provided by the Kriging method: the prediction of the objective function and an estimate of the prediction error. It thus quantifies the improvement in the minimization that new simulations at given DEM parameters would yield.
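The Expected Improvement criterion at the heart of EGO has a standard closed form for minimization: given the Kriging posterior mean mu and standard deviation sigma at a candidate point, and the current best observed value f_min, EI = (f_min - mu)·Φ(z) + σ·φ(z) with z = (f_min - mu)/σ. A generic sketch (not the authors' implementation):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """Expected Improvement for minimization, evaluated at candidate points
    where a Kriging (Gaussian-process) model predicts mean `mu` and
    standard deviation `sigma`; f_min is the best objective value so far."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    ei = np.zeros_like(mu)
    m = sigma > 0                         # EI is zero where the model is certain
    z = (f_min - mu[m]) / sigma[m]
    ei[m] = (f_min - mu[m]) * norm.cdf(z) + sigma[m] * norm.pdf(z)
    return ei
```

EGO simulates next wherever this quantity is largest, which automatically balances exploiting low predicted objective values against exploring regions of high model uncertainty.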
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), which presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we applied seemingly unrelated regression (SUR) based models, with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the sample-size properties of the estimated coefficients for the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate, without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was similar to that of MRM with classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates and can thus be used as an alternative to MRM.
Quantum realization of the nearest neighbor value interpolation method for INEQR
NASA Astrophysics Data System (ADS)
Zhou, RiGui; Hu, WenWen; Luo, GaoFeng; Liu, XingAo; Fan, Ping
2018-07-01
This paper presents the nearest neighbor value (NNV) interpolation algorithm for the improved novel enhanced quantum representation of digital images (INEQR). It is necessary to use interpolation in image scaling because there is an increase or a decrease in the number of pixels. The difference between the proposed scheme and nearest neighbor interpolation is that the concept applied, to estimate the missing pixel value, is guided by the nearest value rather than the distance. Firstly, a sequence of quantum operations is predefined, such as cyclic shift transformations and the basic arithmetic operations. Then, the feasibility of the nearest neighbor value interpolation method for quantum image of INEQR is proven using the previously designed quantum operations. Furthermore, quantum image scaling algorithm in the form of circuits of the NNV interpolation for INEQR is constructed for the first time. The merit of the proposed INEQR circuit lies in their low complexity, which is achieved by utilizing the unique properties of quantum superposition and entanglement. Finally, simulation-based experimental results involving different classical images and ratios (i.e., conventional or non-quantum) are simulated based on the classical computer's MATLAB 2014b software, which demonstrates that the proposed interpolation method has higher performances in terms of high resolution compared to the nearest neighbor and bilinear interpolation.
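For reference, the classical distance-based nearest-neighbor scaling that the quantum circuits parallel can be sketched in a few lines; the NNV variant of the paper instead selects among neighbors by pixel value, which this baseline does not attempt. A minimal sketch, with the index-mapping convention an assumption:

```python
import numpy as np

def nearest_neighbor_scale(img, ry, rx):
    """Classical nearest-neighbor scaling of a 2-D grayscale image by
    ratios (ry, rx): each output pixel copies the spatially nearest
    source pixel, found by inverse-mapping and truncating the index."""
    h, w = img.shape
    H, W = int(round(h * ry)), int(round(w * rx))
    src_i = np.minimum((np.arange(H) / ry).astype(int), h - 1)  # row lookup table
    src_j = np.minimum((np.arange(W) / rx).astype(int), w - 1)  # column lookup table
    return img[np.ix_(src_i, src_j)]
```

In the quantum setting the same inverse index mapping is realized with cyclic shift transformations and arithmetic on the position qubits, acting on all pixels in superposition.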
A Force Balanced Fragmentation Method for ab Initio Molecular Dynamic Simulation of Protein.
Xu, Mingyuan; Zhu, Tong; Zhang, John Z H
2018-01-01
A force balanced generalized molecular fractionation with conjugate caps (FB-GMFCC) method is proposed for ab initio molecular dynamics simulation of proteins. In this approach, the energy of the protein is computed as a linear combination of the QM energies of individual residues and of molecular fragments that account for the two-body interaction of hydrogen bonds between backbone peptides. The atomic forces on the capped H atoms are corrected to conserve the total force on the protein. Using this approach, an ab initio molecular dynamics simulation of an Ace-(ALA)9-NME linear peptide showed conservation of the total energy of the system throughout the simulation. Furthermore, a more demanding 110 ps ab initio molecular dynamics simulation was performed for a protein with 56 residues and 862 atoms in explicit water. Compared with the classical force field, the ab initio molecular dynamics simulations gave a better description of the geometry of peptide bonds. Although further development is still needed, the current approach is highly efficient, trivially parallel, and applicable to ab initio molecular dynamics simulation studies of large proteins.
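One simple way to enforce the zero-net-force condition mentioned above is to subtract a mass-weighted share of the residual total force from every atom; this is a generic sketch of the idea, and the paper's actual correction (applied specifically to the capped H atoms) may distribute the residual differently.

```python
import numpy as np

def conserve_total_force(forces, masses):
    """Remove a spurious net force (e.g. introduced by fragment capping) by
    subtracting a mass-weighted share from every atom, so that the corrected
    total force on the system is exactly zero and momentum is conserved."""
    forces = np.asarray(forces, float)          # shape (n_atoms, 3)
    masses = np.asarray(masses, float).reshape(-1, 1)
    net = forces.sum(axis=0)                    # residual total force
    return forces - masses * net / masses.sum() # mass-weighted redistribution

f = np.array([[1.0, 0.0, 0.0],
              [0.5, -0.2, 0.1]])
f_corr = conserve_total_force(f, [12.0, 1.0])   # e.g. a C and an H atom
```

Mass weighting keeps the correction from imparting a spurious center-of-mass acceleration, which is what drives the energy drift a naive fragmentation would show.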
Hierarchical Coupling of First-Principles Molecular Dynamics with Advanced Sampling Methods.
Sevgen, Emre; Giberti, Federico; Sidky, Hythem; Whitmer, Jonathan K; Galli, Giulia; Gygi, Francois; de Pablo, Juan J
2018-05-14
We present a seamless coupling of a suite of codes designed to perform advanced sampling simulations, with a first-principles molecular dynamics (MD) engine. As an illustrative example, we discuss results for the free energy and potential surfaces of the alanine dipeptide obtained using both local and hybrid density functionals (DFT), and we compare them with those of a widely used classical force field, Amber99sb. In our calculations, the efficiency of first-principles MD using hybrid functionals is augmented by hierarchical sampling, where hybrid free energy calculations are initiated using estimates obtained with local functionals. We find that the free energy surfaces obtained from classical and first-principles calculations differ. Compared to DFT results, the classical force field overestimates the internal energy contribution of high free energy states, and it underestimates the entropic contribution along the entire free energy profile. Using the string method, we illustrate how these differences lead to different transition pathways connecting the metastable minima of the alanine dipeptide. In larger peptides, those differences would lead to qualitatively different results for the equilibrium structure and conformation of these molecules.
Shortcuts to adiabaticity using flow fields
NASA Astrophysics Data System (ADS)
Patra, Ayoti; Jarzynski, Christopher
2017-12-01
A shortcut to adiabaticity is a recipe for generating adiabatic evolution at an arbitrary pace. Shortcuts have been developed for quantum, classical and (most recently) stochastic dynamics. A shortcut might involve a counterdiabatic (CD) Hamiltonian that causes a system to follow the adiabatic evolution at all times, or it might utilize a fast-forward (FF) potential, which returns the system to the adiabatic path at the end of the process. We develop a general framework for constructing shortcuts to adiabaticity from flow fields that describe the desired adiabatic evolution. Our approach encompasses quantum, classical and stochastic dynamics, and provides surprisingly compact expressions for both CD Hamiltonians and FF potentials. We illustrate our method with numerical simulations of a model system, and we compare our shortcuts with previously obtained results. We also consider the semiclassical connections between our quantum and classical shortcuts. Our method, like the FF approach developed by previous authors, is susceptible to singularities when applied to excited states of quantum systems; we propose a simple, intuitive criterion for determining whether these singularities will arise, for a given excited state.
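For orientation, the conventional counterdiabatic construction (the Demirplak-Rice/Berry form, stated here for context; the paper's flow-field framework arrives at a more compact equivalent) reads:

```latex
% Counterdiabatic Hamiltonian for a control parameter \lambda(t), with
% instantaneous eigenstates H_0(\lambda)|n(\lambda)\rangle = E_n(\lambda)|n(\lambda)\rangle:
H(t) = H_0(\lambda) + i\hbar\,\dot{\lambda} \sum_n
       \left( \lvert \partial_\lambda n \rangle\langle n \rvert
       - \langle n \vert \partial_\lambda n \rangle\,
         \lvert n \rangle\langle n \rvert \right)
```

The added term cancels nonadiabatic transitions exactly at any driving speed; the fast-forward alternative instead absorbs its effect into a local potential that restores the adiabatic state only at the end of the protocol.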
ERIC Educational Resources Information Center
Bazaldua, Diego A. Luna; Lee, Young-Sun; Keller, Bryan; Fellers, Lauren
2017-01-01
The performance of various classical test theory (CTT) item discrimination estimators has been compared in the literature using both empirical and simulated data, resulting in mixed results regarding the preference of some discrimination estimators over others. This study analyzes the performance of various item discrimination estimators in CTT:…
Kendon, Vivien M; Nemoto, Kae; Munro, William J
2010-08-13
We briefly review what a quantum computer is, what it promises to do for us and why it is so hard to build one. Among the first applications anticipated to bear fruit is the quantum simulation of quantum systems. While most quantum computation is an extension of classical digital computation, quantum simulation differs fundamentally in how the data are encoded in the quantum computer. To perform a quantum simulation, the Hilbert space of the system to be simulated is mapped directly onto the Hilbert space of the (logical) qubits in the quantum computer. This type of direct correspondence is how data are encoded in a classical analogue computer. There is no binary encoding, and increasing precision becomes exponentially costly: an extra bit of precision doubles the size of the computer. This has important consequences for both the precision and error-correction requirements of quantum simulation, and significant open questions remain about its practicality. It also means that the quantum version of analogue computers, continuous-variable quantum computers, becomes an equally efficient architecture for quantum simulation. Lessons from past use of classical analogue computers can help us to build better quantum simulators in future.
Quantum chemistry simulation on quantum computers: theories and experiments.
Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng
2012-07-14
It has been claimed that quantum computers can mimic quantum systems efficiently, with only polynomial resources. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with exponential growth of the required resources as the size of the quantum system increases. Quantum computers avoid this problem and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the developments in both theory and experiment. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures for quantum simulation of quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations on a small quantum computer, including evaluation of static molecular eigenenergies and simulation of chemical reaction dynamics. Although experimental development still lags behind theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry, outperforming classical computation.
Quantum Fragment Based ab Initio Molecular Dynamics for Proteins.
Liu, Jinfeng; Zhu, Tong; Wang, Xianwei; He, Xiao; Zhang, John Z H
2015-12-08
Developing ab initio molecular dynamics (AIMD) methods for practical application in protein dynamics is of significant interest. Due to the large size of biomolecules, applying standard quantum chemical methods to compute energies for dynamic simulation is computationally prohibitive. In this work, a fragment-based ab initio molecular dynamics approach is presented for practical application in protein dynamics studies. In this approach, the energy and forces of the protein are calculated by a recently developed electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method. For simulation in explicit solvent, mechanical embedding is introduced to treat protein interaction with explicit water molecules. This AIMD approach has been applied to MD simulations of the small benchmark protein Trp-cage (20 residues, 304 atoms) in both the gas phase and in solution. Comparison to simulation results using the AMBER force field shows that AIMD gives a more stable protein structure, indicating that the quantum chemical energy is more reliable. Importantly, the present fragment-based AIMD simulation captures quantum effects, including electrostatic polarization and charge transfer, that are missing in standard classical MD simulations. The current approach is linear-scaling, trivially parallel, and applicable to AIMD simulations of large proteins.
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations, so classical initial orbit determination algorithms, such as the Laplace and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method, and a genetic algorithm is used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
2010-01-01
Background Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT), based on the observed scores, and models coming from Item Response Theory (IRT). However, whether IRT or CTT is the more appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT- or CTT-based analysis. For IRT, different scenarios were investigated according to whether item or person parameters were assumed to be known, known to a certain extent for item parameters (from good to poor precision), or unknown and therefore had to be estimated. The powers obtained with IRT or CTT were compared and the parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
NASA Astrophysics Data System (ADS)
VandeVondele, Joost; Rothlisberger, Ursula
2000-09-01
We present a method for calculating multidimensional free energy surfaces within the limited time scale of a first-principles molecular dynamics scheme. The sampling efficiency is enhanced using selected terms of a classical force field as a bias potential. This simple procedure yields a very substantial increase in sampling accuracy while retaining the high quality of the underlying ab initio potential surface and can thus be used for a parameter free calculation of free energy surfaces. The success of the method is demonstrated by the applications to two gas phase molecules, ethane and peroxynitrous acid, as test case systems. A statistical analysis of the results shows that the entire free energy landscape is well converged within a 40 ps simulation at 500 K, even for a system with barriers as high as 15 kcal/mol.
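Recovering the unbiased free energy from a simulation run under a known bias potential follows the standard umbrella-style reweighting F(s) = -kT ln P(s), with each sample reweighted by exp(+V_bias/kT); the sketch below is generic and not the authors' implementation (their bias is a subset of force-field terms, but the reweighting identity is the same).

```python
import numpy as np

def unbiased_free_energy(samples, v_bias, kT, bins=50):
    """Recover an unbiased 1-D free energy profile F(s) = -kT ln P(s)
    from samples of a collective variable s drawn under a known bias
    potential V_bias(s): each sample is reweighted by exp(+V_bias/kT)."""
    v_bias = np.asarray(v_bias, float)
    w = np.exp((v_bias - v_bias.min()) / kT)   # shift for numerical stability
    hist, edges = np.histogram(samples, bins=bins, weights=w, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    with np.errstate(divide="ignore"):
        F = -kT * np.log(hist)                 # empty bins give F = +inf
    return centers, F - F[np.isfinite(F)].min()  # zero the global minimum
```

Because the bias only reshapes the sampled distribution, any bias that flattens barriers accelerates convergence without altering the recovered profile, which is the essence of the method in the abstract.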
CABS-flex: server for fast simulation of protein structure fluctuations
Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian
2013-01-01
The CABS-flex server (http://biocomp.chem.uw.edu.pl/CABSflex) implements CABS-model–based protocol for the fast simulations of near-native dynamics of globular proteins. In this application, the CABS model was shown to be a computationally efficient alternative to all-atom molecular dynamics—a classical simulation approach. The simulation method has been validated on a large set of molecular dynamics simulation data. Using a single input (user-provided file in PDB format), the CABS-flex server outputs an ensemble of protein models (in all-atom PDB format) reflecting the flexibility of the input structure, together with the accompanying analysis (residue mean-square-fluctuation profile and others). The ensemble of predicted models can be used in structure-based studies of protein functions and interactions. PMID:23658222
Thermodynamics of reformulated automotive fuels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zudkevitch, D.; Murthy, A.K.S.; Gmehling, J.
1995-06-01
Two methods for predicting Reid vapor pressure (Rvp) and initial vapor emissions of reformulated gasoline blends that contain one or more oxygenated compounds show excellent agreement with experimental data. In the first method, method A, D-86 distillation data for gasoline blends are used for predicting Rvp from a simulation of the mini dry vapor pressure equivalent (Dvpe) experiment. The other method, method B, relies on analytical information (PIANO analyses) of the base gasoline and uses classical thermodynamics to simulate the same Rvp equivalent (Rvpe) mini experiment. Method B also predicts the composition and other properties of the fuel's initial vapor emission. Method B, although complex, is more useful in that it can predict properties of blends without a D-86 distillation. An important aspect of method B is its capability to predict the composition of initial vapor emissions from gasoline blends; thus, it offers a powerful tool to planners of gasoline blending. Method B uses theoretically sound formulas and rigorous thermodynamic routines, together with data and correlations of physical properties that are in the public domain. Results indicate that predictions made with both methods agree very well with experimental values of Dvpe. Both computer simulation methods were programmed and tested.
Airplane numerical simulation for the rapid prototyping process
NASA Astrophysics Data System (ADS)
Roysdon, Paul F.
Airplane Numerical Simulation for the Rapid Prototyping Process is a comprehensive research investigation into the most up-to-date methods for airplane development and design. Uses of modern engineering software tools, such as MATLAB and Excel, are presented with examples of batch and optimization algorithms that combine the computing power of MATLAB with robust aerodynamic tools such as XFOIL and AVL. The resulting data are demonstrated in the development and use of a full non-linear six-degrees-of-freedom simulator. The applications of this numerical toolbox range from unmanned aerial vehicles to first-order analysis of manned aircraft. A blended-wing-body airplane is used for the analysis to demonstrate the flexibility of the code, from classic wing-and-tail configurations to less common configurations such as the blended wing body. This configuration has been shown to have superior aerodynamic performance compared with classic wing-and-tube-fuselage counterparts, reduced sensitivity to aerodynamic flutter, and potential for increased engine-noise abatement. Of course, without a classic tail elevator to damp the nose-up pitching moment and a vertical-tail rudder to damp yaw and roll dynamics, the challenges in lateral roll and yaw stability, as well as in pitching moment, are not insignificant. This thesis work applies the tools necessary to perform airplane development and optimization on a rapid basis, demonstrating the strength of this toolbox through examples and comparison of the results with similar airplane performance characteristics published in the literature.
Multilevel Monte Carlo simulation of Coulomb collisions
Rosin, M. S.; Ricketson, L. F.; Dimits, A. M.; ...
2014-05-29
We present a new (for plasma physics) highly efficient multilevel Monte Carlo numerical method for simulating Coulomb collisions. The method separates and optimally minimizes the finite-timestep and finite-sampling errors inherent in the Langevin representation of the Landau-Fokker-Planck equation. It does so by combining multiple solutions to the underlying equations with varying numbers of timesteps. For a desired level of accuracy ε, the computational cost of the method is O(ε⁻²) or O(ε⁻²(ln ε)²), depending on whether the underlying discretization is Milstein or Euler-Maruyama, respectively. This is to be contrasted with a cost of O(ε⁻³) for direct simulation Monte Carlo or binary collision methods. We successfully demonstrate the method with a classic beam diffusion test case in 2D, making use of the Lévy area approximation for the correlated Milstein cross terms, and generating a computational saving of a factor of 100 for ε = 10⁻⁵. Lastly, we discuss the importance of the method for problems in which collisions constitute the computational rate-limiting step, and its limitations.
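The multilevel idea above, i.e. a telescoping sum of coarse estimates plus cheap-to-sample corrections from coupled coarse/fine paths, can be sketched on a toy SDE. This uses Euler-Maruyama on geometric Brownian motion as a stand-in, not the Lévy-area Milstein scheme or the Landau-Fokker-Planck physics of the paper; all numbers are illustrative.

```python
import numpy as np

def mlmc_estimate(levels=4, n0=20_000, S0=1.0, mu=0.05, sig=0.2, T=1.0, seed=0):
    """Multilevel Monte Carlo estimate of E[S_T] for dS = mu*S dt + sig*S dW.
    Level l uses 2**l Euler-Maruyama steps; coarse and fine paths share the
    same Brownian increments, so the correction variance shrinks with l."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 // 4**l, 100)      # fewer samples on finer (costlier) levels
        nf = 2**l
        dt = T / nf
        dW = rng.standard_normal((n, nf)) * np.sqrt(dt)
        Sf = np.full(n, S0)
        for k in range(nf):                        # fine path
            Sf = Sf + mu * Sf * dt + sig * Sf * dW[:, k]
        if l == 0:
            total += Sf.mean()                     # coarse base estimate
        else:
            Sc = np.full(n, S0)
            dWc = dW[:, 0::2] + dW[:, 1::2]        # coarse increments from fine ones
            for k in range(nf // 2):               # coupled coarse path
                Sc = Sc + mu * Sc * (2 * dt) + sig * Sc * dWc[:, k]
            total += (Sf - Sc).mean()              # telescoping correction
    return total

est = mlmc_estimate()   # exact answer is S0*exp(mu*T) ≈ 1.0513
```

Because most samples are spent on the cheap coarse level while the expensive fine levels only estimate small corrections, the overall cost scales like O(ε⁻²) rather than the O(ε⁻³) of brute-force fine-step Monte Carlo.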
Hybrid Quantum-Classical Approach to Quantum Optimal Control.
Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu
2017-04-14
A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.
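The classical half of such a hybrid loop is an ordinary gradient-ascent iteration that repeatedly queries the quantum device for the fitness and its gradient. In the sketch below the quantum simulator is mocked by a classical toy function (an assumption for illustration; on real hardware both quantities would come from evolution-and-measurement runs, as in the abstract).

```python
import numpy as np

def mock_quantum_simulator(theta):
    """Stand-in for the quantum hardware: returns a toy fidelity peaked at a
    hypothetical optimal control vector, together with its gradient."""
    target = np.array([0.3, -0.7, 1.1])     # hypothetical optimal controls
    diff = theta - target
    fitness = np.exp(-np.dot(diff, diff))   # peaks at 1 when theta == target
    grad = -2.0 * diff * fitness            # analytic gradient of the toy fitness
    return fitness, grad

def hybrid_optimize(theta0, lr=0.5, iters=200):
    """Classical outer loop: query the (mock) quantum simulator, ascend the
    fitness gradient, repeat until the control parameters converge."""
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        fitness, grad = mock_quantum_simulator(theta)
        theta = theta + lr * grad           # gradient ascent on the fidelity
    return theta, fitness

theta_opt, f_opt = hybrid_optimize([0.0, 0.0, 0.0])
```

The point of the hybrid scheme is that only the update step runs classically; the expensive Hilbert-space evolution hidden inside the fitness evaluation stays on the quantum side.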
QSPIN: A High Level Java API for Quantum Computing Experimentation
NASA Technical Reports Server (NTRS)
Barth, Tim
2017-01-01
QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling are provided to demonstrate current capabilities.
Quantum theory of multiscale coarse-graining.
Han, Yining; Jin, Jaehyeok; Wagner, Jacob W; Voth, Gregory A
2018-03-14
Coarse-grained (CG) models serve as a powerful tool to simulate molecular systems at much longer temporal and spatial scales. Previously, CG models and methods have been built upon classical statistical mechanics. The present paper develops a theory and numerical methodology for coarse-graining in quantum statistical mechanics, by generalizing the multiscale coarse-graining (MS-CG) method to quantum Boltzmann statistics. A rigorous derivation of the sufficient thermodynamic consistency condition is first presented via imaginary time Feynman path integrals. It identifies the optimal choice of CG action functional and effective quantum CG (qCG) force field to generate a quantum MS-CG (qMS-CG) description of the equilibrium system that is consistent with the quantum fine-grained model projected onto the CG variables. A variational principle then provides a class of algorithms for optimally approximating the qMS-CG force fields. Specifically, a variational method based on force matching, which was also adopted in the classical MS-CG theory, is generalized to quantum Boltzmann statistics. The qMS-CG numerical algorithms and practical issues in implementing this variational minimization procedure are also discussed. Then, two numerical examples are presented to demonstrate the method. Finally, as an alternative strategy, a quasi-classical approximation for the thermal density matrix expressed in the CG variables is derived. This approach provides an interesting physical picture for coarse-graining in quantum Boltzmann statistical mechanics in which the consistency with the quantum particle delocalization is obviously manifest, and it opens up an avenue for using path integral centroid-based effective classical force fields in a coarse-graining methodology.
NASA Astrophysics Data System (ADS)
Holloway, Stephen
1997-03-01
When performing molecular dynamical simulations on light systems at low energies, there is always the risk of producing data that bear no similarity to experiment. Indeed, John Barker himself was particularly anxious about treating Ar scattering from surfaces using classical mechanics when it had been shown experimentally in his own lab that diffraction occurs. In such cases, the correct procedure is probably to play the trump card "... well of course, quantum effects will modify this so that....." and retire gracefully. For our particular interests, the tables are turned: we are interested in gas-surface dynamical studies for highly quantized systems, but would like to know when it is possible to use classical mechanics so that a greater dimensionality might be treated. For molecular dissociation and scattering, it has been oft quoted that the greater the number of degrees of freedom, the more appropriate is classical mechanics, primarily because of the mass averaging over the quantized dimensions. Is this true? We have been investigating the dissociation of hydrogen molecules at surfaces, and in this talk I will present quantum results for dissociation and scattering, along with a novel method for their interpretation based upon adiabatic potential energy surfaces. Comparison with classical calculations will be made and conclusions drawn.
Sequential Geoacoustic Filtering and Geoacoustic Inversion
2015-09-30
and online algorithms. We show here that CS obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods... windows actually performs better than conventional beamforming and MVDR/MUSIC (see Figs. 1-2). Compressive geoacoustic inversion Geoacoustic... histograms based on 100 Monte Carlo simulations, and c) CS, exhaustive-search, CBF, MVDR, and MUSIC performance versus SNR. The true source positions
Bypassing the malfunction junction in warm dense matter simulations
NASA Astrophysics Data System (ADS)
Cangi, Attila; Pribram-Jones, Aurora
2015-03-01
Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.
Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations
NASA Astrophysics Data System (ADS)
Niemeier, Wolfgang; Tengen, Dieter
2017-06-01
In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guide to the Expression of Uncertainty in Measurement). This approach is well established in metrology, but rarely adopted within geodesy. The second step consists of Monte Carlo simulations (MC simulations) for the complete processing chain, from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters via their covariance matrix, and it allows a new way of uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the Metsähovi Fundamental Station, Finland is used, where classical geodetic observations are combined with GNSS data.
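The two-step idea can be sketched with a toy leveling network: draw pseudo-random realizations of the raw observations, run the adjustment on each, and compare the spread of the resulting point cloud with classical variance propagation. The network geometry, noise level, and sample count below are illustrative assumptions, not the Metsähovi configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy leveling network: two unknown heights observed via three height
# differences relative to a fixed benchmark (geometry and noise invented).
A = np.array([[ 1.0, 0.0],    # l1 = h1
              [-1.0, 1.0],    # l2 = h2 - h1
              [ 0.0, 1.0]])   # l3 = h2
h_true = np.array([10.0, 25.0])
sigma = 0.005                 # GUM-style standard uncertainty per observation, m

n_mc = 20_000
estimates = np.empty((n_mc, 2))
for k in range(n_mc):
    l = A @ h_true + rng.normal(0.0, sigma, size=3)        # simulated raw data
    estimates[k], *_ = np.linalg.lstsq(A, l, rcond=None)   # adjustment step

# The spread of the MC "point cloud" reproduces classical variance
# propagation Q = sigma^2 (A^T A)^-1.
Q = sigma**2 * np.linalg.inv(A.T @ A)
print(estimates.std(axis=0), np.sqrt(np.diag(Q)))
```

For this linear, Gaussian toy case the two answers agree; the point of the MC route is that it still works when the processing chain is nonlinear or the input distributions are non-Gaussian.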
Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette
2018-05-01
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partly yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
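The attenuation caused by classical measurement error, and its method-of-moments correction, can be illustrated with a minimal simulation. The exposure model, error scales, and sample size below are invented for illustration; the study itself works with linear mixed models and autocorrelated errors.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000
beta = 2.0                     # true effect size (invented for illustration)
sigma_x, sigma_u = 1.0, 0.5    # true exposure scale and classical error scale

x = rng.normal(0.0, sigma_x, n)          # true (unobserved) exposure
w = x + rng.normal(0.0, sigma_u, n)      # mismeasured exposure, classical error
y = beta * x + rng.normal(0.0, 1.0, n)   # outcome

beta_naive = np.cov(w, y)[0, 1] / np.var(w)     # attenuated naive slope
lam = sigma_x**2 / (sigma_x**2 + sigma_u**2)    # attenuation (reliability) factor
beta_corrected = beta_naive / lam               # method-of-moments correction

print(beta_naive, beta_corrected)   # ≈ 1.6 and ≈ 2.0
```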
NASA Astrophysics Data System (ADS)
Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.
2018-03-01
This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most existing fuzzy forecasting methods based on fuzzy time series use a static interval length. We therefore apply the FA to set a non-stationary interval length for each cluster in the Chen method. The method is evaluated by application to the Jakarta Composite Index (IHSG) and compared with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
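The core FA update, where each firefly moves toward every brighter one with an attractiveness that decays with distance, can be sketched on a made-up objective. The quadratic objective stands in for the forecast-error criterion that the interval parameters would minimize; it is not the actual Chen-method objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # Hypothetical stand-in for the forecast error as a function of the
    # interval parameters tuned by FA; minimum at (3, 3) by construction.
    return np.sum((x - 3.0) ** 2, axis=-1)

n_fireflies, dim, n_iter = 15, 2, 200
beta0, gamma, alpha = 1.0, 0.03, 0.1
x = rng.uniform(0.0, 6.0, (n_fireflies, dim))

for _ in range(n_iter):
    f = objective(x)
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:  # firefly j is brighter, so i moves toward it
                r2 = np.sum((x[i] - x[j]) ** 2)
                x[i] = x[i] + beta0 * np.exp(-gamma * r2) * (x[j] - x[i]) \
                       + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.99  # gradually damp the random walk

best = x[np.argmin(objective(x))]
print(best)
```

The swarm collapses onto the brightest firefly while the damped random step keeps refining the optimum; in the paper's setting the optimized variables would be the cluster-wise interval lengths.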
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv; Nakano, Aiichiro; Vashishta, Priya; Iyetomi, Hiroshi; Ogata, Shuji; Kouno, Takahisa; Shimojo, Fuyuki; Tsuruta, Kanji; Saini, Subhash;
2002-01-01
A multidisciplinary, collaborative simulation has been performed on a Grid of geographically distributed PC clusters. The multiscale simulation approach seamlessly combines i) atomistic simulation based on the molecular dynamics (MD) method and ii) quantum mechanical (QM) calculation based on the density functional theory (DFT), so that accurate but less scalable computations are performed only where they are needed. The multiscale MD/QM simulation code has been Grid-enabled using i) a modular, additive hybridization scheme, ii) multiple QM clustering, and iii) computation/communication overlapping. The Gridified MD/QM simulation code has been used to study environmental effects of water molecules on fracture in silicon. A preliminary run of the code has achieved a parallel efficiency of 94% on 25 PCs distributed over 3 PC clusters in the US and Japan, and a larger test involving 154 processors on 5 distributed PC clusters is in progress.
Positive Wigner functions render classical simulation of quantum computation efficient.
Mari, A; Eisert, J
2012-12-07
We show that quantum circuits where the initial state and all the following quantum operations can be represented by positive Wigner functions can be classically efficiently simulated. This is true both for continuous-variable as well as discrete variable systems in odd prime dimensions, two cases which will be treated on entirely the same footing. Noting the fact that Clifford and Gaussian operations preserve the positivity of the Wigner function, our result generalizes the Gottesman-Knill theorem. Our algorithm provides a way of sampling from the output distribution of a computation or a simulation, including the efficient sampling from an approximate output distribution in the case of sampling imperfections for initial states, gates, or measurements. In this sense, this work highlights the role of the positive Wigner function as separating classically efficiently simulable systems from those that are potentially universal for quantum computing and simulation, and it emphasizes the role of negativity of the Wigner function as a computational resource.
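The sampling algorithm rests on the fact that a non-negative Wigner function is a genuine probability density, so expectation values of symmetrically ordered observables become ordinary phase-space averages. A minimal sketch for a coherent state follows; the Gaussian form and the hbar = 1 quadrature convention are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Coherent state |alpha>: its Wigner function is a Gaussian centred at
# (x0, p0) = sqrt(2) * (Re alpha, Im alpha) with variance 1/2 per quadrature
# (hbar = 1 convention assumed in this sketch).
alpha = 1.5 + 0.5j
x0, p0 = np.sqrt(2.0) * alpha.real, np.sqrt(2.0) * alpha.imag

n = 500_000
x = rng.normal(x0, np.sqrt(0.5), n)   # phase-space samples drawn from W(x, p)
p = rng.normal(p0, np.sqrt(0.5), n)

# Moments of symmetrically ordered observables are plain averages over the
# samples -- this is the efficient classical sampling the theorem permits.
print(x.mean(), (x**2).mean())   # ≈ x0 and x0**2 + 1/2
```

Gaussian operations map Gaussians to Gaussians, so the samples can be pushed through an entire circuit of such operations without the density ever becoming negative.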
Marsalek, Ondrej; Markland, Thomas E
2016-02-07
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
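The cost saving of contraction to the centroid can be conveyed with a toy force decomposition: the cheap reference force is evaluated on every bead, while the expensive correction is evaluated once, at the centroid. Both force functions below are invented for this sketch and merely stand in for tight binding and full DFT.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forces standing in for the cheap reference (e.g. tight binding) and
# the expensive correction (e.g. full DFT); both are invented for this sketch.
def cheap_force(q):
    return -q

def expensive_force(q):
    return -0.1 * q**3

P = 32                              # number of ring-polymer beads
beads = rng.normal(0.0, 0.1, P)     # bead positions, tightly clustered
centroid = beads.mean()

# Full evaluation: P expensive calls.  Contracted evaluation: the expensive
# force is computed once, at the centroid, and shared by all beads.
f_full = cheap_force(beads) + expensive_force(beads)
f_contracted = cheap_force(beads) + expensive_force(centroid)

print(np.max(np.abs(f_full - f_contracted)))
```

Because the beads cluster tightly around the centroid, the contracted force differs from the full one by far less than the force scale itself, which is why a single expensive call per step can suffice.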
Classical simulation of quantum many-body systems
NASA Astrophysics Data System (ADS)
Huang, Yichen
Classical simulation of quantum many-body systems is in general a challenging problem for the simple reason that the dimension of the Hilbert space grows exponentially with the system size. In particular, merely encoding a generic quantum many-body state requires an exponential number of bits. However, condensed matter physicists are mostly interested in local Hamiltonians and especially their ground states, which are highly non-generic. Thus, we might hope that at least some physical systems allow efficient classical simulation. Starting with one-dimensional (1D) quantum systems (i.e., the simplest nontrivial case), the first basic question is: Which classes of states have efficient classical representations? It turns out that this question is quantitatively related to the amount of entanglement in the state, for states with "little entanglement" are well approximated by matrix product states (a data structure that can be manipulated efficiently on a classical computer). At a technical level, the mathematical notion for "little entanglement" is area law, which has been proved for unique ground states in 1D gapped systems. We establish an area law for constant-fold degenerate ground states in 1D gapped systems and thus explain the effectiveness of matrix-product-state methods in (e.g.) symmetry breaking phases. This result might not be intuitively trivial as degenerate ground states in gapped systems can be long-range correlated. Suppose an efficient classical representation exists. How can one find it efficiently? The density matrix renormalization group is the leading numerical method for computing ground states in 1D quantum systems. However, it is a heuristic algorithm and the possibility that it may fail in some cases cannot be completely ruled out. Recently, a provably efficient variant of the density matrix renormalization group has been developed for frustration-free 1D gapped systems. 
We generalize this algorithm to all (i.e., possibly frustrated) 1D gapped systems. Note that the ground-state energy of 1D gapless Hamiltonians is computationally intractable even in the presence of translational invariance. It is tempting to extend methods and tools in 1D to two and higher dimensions (2+D), e.g., matrix product states are generalized to tensor network states. Since an area law for entanglement (if formulated properly) implies efficient matrix product state representations in 1D, an interesting question is whether a similar implication holds in 2+D. Roughly speaking, we show that an area law for entanglement (in any reasonable formulation) does not always imply efficient tensor network representations of the ground states of 2+D local Hamiltonians even in the presence of translational invariance. It should be emphasized that this result does not contradict the common-sense expectation that in practice quantum states with more entanglement usually require more space to be stored classically; rather, it demonstrates that the relationship between entanglement and efficient classical representations is still far from being well understood. Excited eigenstates participate in the dynamics of quantum systems and are particularly relevant to the phenomenon of many-body localization (absence of transport at finite temperature in strongly correlated systems). We study the entanglement of excited eigenstates in random spin chains and expect that its singularities coincide with dynamical quantum phase transitions. This expectation is confirmed in the disordered quantum Ising chain using both analytical and numerical methods. Finally, we study the problem of generating ground states (possibly with topological order) in 1D gapped systems using quantum circuits. This is an interesting problem both in theory and in practice. 
It not only characterizes the essential difference between the entanglement patterns that give rise to trivial and nontrivial topological order, but also quantifies the difficulty of preparing quantum states with a quantum computer (in experiments).
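The connection between little entanglement and matrix product states can be sketched with a sweep of truncated singular value decompositions, which converts a state vector into an MPS with a capped bond dimension. The product-state input and the bond cap below are illustrative choices.

```python
import numpy as np

def to_mps(psi, n_sites, chi_max):
    """Sweep of truncated SVDs: state vector -> matrix product state."""
    tensors, chi = [], 1
    rest = psi.reshape(chi, -1)
    for _ in range(n_sites - 1):
        rest = rest.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(chi_max, len(s))        # area-law-motivated truncation
        u, s, vh = u[:, :keep], s[:keep], vh[:keep]
        tensors.append(u.reshape(chi, 2, keep))
        rest = np.diag(s) @ vh             # carry the remainder rightward
        chi = keep
    tensors.append(rest.reshape(chi, 2, 1))
    return tensors

def contract(tensors):
    """Contract the MPS back into a full state vector (for checking only)."""
    psi = tensors[0]
    for t in tensors[1:]:
        psi = np.tensordot(psi, t, axes=([-1], [0]))
    return psi.reshape(-1)

# A product state has bond dimension 1, so a small cap reproduces it exactly.
n = 8
single = np.array([np.cos(0.3), np.sin(0.3)])
psi = single
for _ in range(n - 1):
    psi = np.kron(psi, single)

mps = to_mps(psi, n, chi_max=4)
overlap = np.abs(np.vdot(contract(mps), psi))
print(overlap)   # ≈ 1.0
```

For weakly entangled states the discarded singular values are small and the overlap stays near one; for generic states the required bond dimension, and hence the cost, grows exponentially, which is the obstruction discussed above.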
Classical simulation of quantum error correction in a Fibonacci anyon code
NASA Astrophysics Data System (ADS)
Burton, Simon; Brell, Courtney G.; Flammia, Steven T.
2017-02-01
Classically simulating the dynamics of anyonic excitations in two-dimensional quantum systems is likely intractable in general because such dynamics are sufficient to implement universal quantum computation. However, processes of interest for the study of quantum error correction in anyon systems are typically drawn from a restricted class that displays significant structure over a wide range of system parameters. We exploit this structure to classically simulate, and thereby demonstrate the success of, an error-correction protocol for a quantum memory based on the universal Fibonacci anyon model. We numerically simulate a phenomenological model of the system and noise processes on lattice sizes of up to 128 × 128 sites, and find a lower bound on the error-correction threshold of approximately 0.125 errors per edge, which is comparable to those previously known for Abelian and (nonuniversal) non-Abelian anyon models.
NASA Astrophysics Data System (ADS)
Bierwage, A.; Todo, Y.
2017-11-01
The transport of fast ions in a beam-driven JT-60U tokamak plasma subject to resonant magnetohydrodynamic (MHD) mode activity is simulated using the so-called multi-phase method, where 4 ms intervals of classical Monte-Carlo simulations (without MHD) are interlaced with 1 ms intervals of hybrid simulations (with MHD). The multi-phase simulation results are compared to results obtained with continuous hybrid simulations, which were recently validated against experimental data (Bierwage et al., 2017). It is shown that the multi-phase method, in spite of causing significant overshoots in the MHD fluctuation amplitudes, accurately reproduces the frequencies and positions of the dominant resonant modes, as well as the spatial profile and velocity distribution of the fast ions, while consuming only a fraction of the computation time required by the continuous hybrid simulation. The present paper is limited to low-amplitude fluctuations consisting of a few long-wavelength modes that interact only weakly with each other. The success of this benchmark study paves the way for applying the multi-phase method to the simulation of Abrupt Large-amplitude Events (ALE), which were seen in the same JT-60U experiments but at larger time intervals. Possible implications for the construction of reduced models for fast ion transport are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuttis, Hans-Georg; Wang, Xiaoxing
Decomposition methods of the Suzuki-Trotter type of various orders have been derived in different fields. Applying them both to classical ordinary differential equations (ODEs) and to quantum systems allows one to judge their effectiveness and gives new insights for many-body quantum mechanics, where reference data are scarce. Further, based on data for a 6 × 6 system, we conclude that sampling with sign (minus-sign problem) is probably detrimental to the accuracy of fermionic simulations with determinant algorithms.
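The classical-ODE side of this comparison can be sketched with the second-order Suzuki-Trotter (Strang) splitting of the harmonic oscillator, where halving the time step should quarter the error. The oscillator and step sizes are illustrative choices, not the paper's benchmark system.

```python
import numpy as np

def strang_step(x, p, dt):
    # H = p^2/2 + x^2/2 split into potential and kinetic parts:
    # exp(dt L) ≈ exp(dt/2 L_V) exp(dt L_T) exp(dt/2 L_V)  (2nd order)
    p -= 0.5 * dt * x   # half kick (force = -x)
    x += dt * p         # full drift
    p -= 0.5 * dt * x   # half kick
    return x, p

def integrate(dt, t_end=1.0):
    x, p = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        x, p = strang_step(x, p, dt)
    return x

exact = np.cos(1.0)                     # analytic solution x(1) = cos(1)
err1 = abs(integrate(0.01) - exact)
err2 = abs(integrate(0.005) - exact)
print(err1 / err2)   # ≈ 4: halving dt quarters the error, i.e. order 2
```

Higher-order Suzuki-Trotter schemes compose such steps with tuned coefficients; the same convergence test then yields ratios of 16, 64, and so on.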
Current problems in applied mathematics and mathematical physics
NASA Astrophysics Data System (ADS)
Samarskii, A. A.
Papers are presented on such topics as mathematical models in immunology, mathematical problems of medical computer tomography, classical orthogonal polynomials depending on a discrete variable, and boundary layer methods for singular perturbation problems in partial derivatives. Consideration is also given to the computer simulation of supernova explosion, nonstationary internal waves in a stratified fluid, the description of turbulent flows by unsteady solutions of the Navier-Stokes equations, and the reduced Galerkin method for external diffraction problems using the spline approximation of fields.
Path-integral isomorphic Hamiltonian for including nuclear quantum effects in non-adiabatic dynamics
NASA Astrophysics Data System (ADS)
Tao, Xuecheng; Shushkov, Philip; Miller, Thomas F.
2018-03-01
We describe a path-integral approach for including nuclear quantum effects in non-adiabatic chemical dynamics simulations. For a general physical system with multiple electronic energy levels, a corresponding isomorphic Hamiltonian is introduced such that Boltzmann sampling of the isomorphic Hamiltonian with classical nuclear degrees of freedom yields the exact quantum Boltzmann distribution for the original physical system. In the limit of a single electronic energy level, the isomorphic Hamiltonian reduces to the familiar cases of either ring polymer molecular dynamics (RPMD) or centroid molecular dynamics Hamiltonians, depending on the implementation. An advantage of the isomorphic Hamiltonian is that it can easily be combined with existing mixed quantum-classical dynamics methods, such as surface hopping or Ehrenfest dynamics, to enable the simulation of electronically non-adiabatic processes with nuclear quantum effects. We present numerical applications of the isomorphic Hamiltonian to model two- and three-level systems, with encouraging results that include improvement upon a previously reported combination of RPMD with surface hopping in the deep-tunneling regime.
The Quantum Socket: Wiring for Superconducting Qubits - Part 1
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Bejanin, J. H.; Rinehart, J. R.; Bateman, J. D.; Earnest, C. T.; McRae, C. H.; Rohanizadegan, Y.; Shiri, D.; Mariantoni, M.; Penava, B.; Breul, P.; Royak, S.; Zapatka, M.; Fowler, A. G.
Quantum systems with ten superconducting quantum bits (qubits) have been realized, making it possible to show basic quantum error correction (QEC) algorithms. However, a truly scalable architecture has not been developed yet. QEC requires a two-dimensional array of qubits, restricting any interconnection to external classical systems to the third axis. In this talk, we introduce an interconnect solution for solid-state qubits: The quantum socket. The quantum socket employs three-dimensional wires and makes it possible to connect classical electronics with quantum circuits more densely and accurately than methods based on wire bonding. The three-dimensional wires are based on spring-loaded pins engineered to ensure compatibility with quantum computing applications. Extensive design work and machining were required, with a focus on material quality to prevent magnetic impurities. Microwave simulations were undertaken to optimize the design, focusing on the interface between the micro-connector and an on-chip coplanar waveguide pad. Simulations revealed good performance from DC to 10 GHz and were later confirmed against experimental measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Zixuan; Ratner, Mark A.; Seideman, Tamar, E-mail: t-seideman@northwestern.edu
2014-12-14
We develop a numerical approach for simulating light-induced charge transport dynamics across a metal-molecule-metal conductance junction. The finite-difference time-domain method is used to simulate the plasmonic response of the metal structures. The Huygens subgridding technique, as adapted to Lorentz media, is used to bridge the vastly disparate length scales of the plasmonic metal electrodes and the molecular system, maintaining accuracy. The charge and current densities calculated with classical electrodynamics are transformed to an electronic wavefunction, which is then propagated through the molecular linker via the Heisenberg equations of motion. We focus mainly on development of the theory and exemplify our approach by a numerical illustration of a simple system consisting of two silver cylinders bridged by a three-site molecular linker. The electronic subsystem exhibits fascinating light driven dynamics, wherein the charge density oscillates at the driving optical frequency, exhibiting also the natural system timescales, and a resonance phenomenon leads to strong conductance enhancement.
NASA Astrophysics Data System (ADS)
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-03-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
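Simulated annealing, one of the classical baselines named above, can be sketched on a tiny QUBO of the kind both the annealer and its classical competitors minimize. The 3-variable matrix is invented for illustration; real binding-affinity instances are far larger.

```python
import numpy as np

rng = np.random.default_rng(11)

# Minimize x^T Q x over binary x for a small, made-up QUBO instance.
Q = np.array([[-2.0, 1.0, 0.5],
              [ 1.0, -3.0, 1.5],
              [ 0.5, 1.5, -1.0]])

def energy(x):
    return float(x @ Q @ x)

x = rng.integers(0, 2, 3).astype(float)
best_x, best_E = x.copy(), energy(x)
T = 2.0
for _ in range(2000):
    i = rng.integers(3)
    y = x.copy()
    y[i] = 1.0 - y[i]                              # single-bit-flip proposal
    dE = energy(y) - energy(x)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance
        x = y
        if energy(x) < best_E:
            best_x, best_E = x.copy(), energy(x)
    T *= 0.998                                     # geometric cooling schedule

print(best_x, best_E)
```

Exhaustive enumeration of the 8 bit strings confirms the minimum energy is -3; the annealer hardware tackles the same objective by physical relaxation rather than by a Metropolis chain.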
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Soohaeng; Xantheas, Sotiris S.
Water's function as a universal solvent and its role in mediating several biological functions that are responsible for sustaining life has created tremendous interest in the understanding of its structure at the molecular level.1 Due to the size of the simulation cells and the sampling time needed to compute many macroscopic properties, most of the initial simulations are performed using a classical force field whereas several processes that involve chemistry are subsequently probed with electronic structure based methods. A significant effort has therefore been devoted towards the development of classical force fields for water.2 Clusters of water molecules are usefulmore » in probing the intermolecular interactions at the microscopic level as well as providing information about the subtle energy differences that are associated with different bonding arrangements within a hydrogen bonded network. They moreover render a quantitative picture of the nature and magnitude of the various components of the intermolecular interactions such as exchange, dispersion, induction etc. They can finally serve as a vehicle for the study of the convergence of properties with increasing size.« less
Quantum simulation from the bottom up: the case of rebits
NASA Astrophysics Data System (ADS)
Enshan Koh, Dax; Yuezhen Niu, Murphy; Yoder, Theodore J.
2018-05-01
Typically, quantum mechanics is thought of as a linear theory with unitary evolution governed by the Schrödinger equation. While this is technically true and useful for a physicist, with regards to computation it is an unfortunately narrow point of view. Just as a classical computer can simulate highly nonlinear functions of classical states, so too can the more general quantum computer simulate nonlinear evolutions of quantum states. We detail one particular simulation of nonlinearity on a quantum computer, showing how the entire class of -unitary evolutions (on n qubits) can be simulated using a unitary, real-amplitude quantum computer (consisting of n + 1 qubits in total). These operators can be represented as the sum of a linear and antilinear operator, and add an intriguing new set of nonlinear quantum gates to the toolbox of the quantum algorithm designer. Furthermore, a subgroup of these nonlinear evolutions, called the -Cliffords, can be efficiently classically simulated, by making use of the fact that Clifford operators can simulate non-Clifford (in fact, non-linear) operators. This perspective of using the physical operators that we have to simulate non-physical ones that we do not is what we call bottom-up simulation, and we give some examples of its broader implications.
Feitosa, V P; Gotti, V B; Grohmann, C V; Abuná, G; Correr-Sobrinho, L; Sinhoreti, M A C; Correr, A B
2014-09-01
To evaluate the effects of two methods to simulate physiological pulpal pressure on the dentine bonding performance of two all-in-one adhesives and a two-step self-etch silorane-based adhesive by means of microtensile bond strength (μTBS) and nanoleakage surveys. The self-etch adhesives [G-Bond Plus (GB), Adper Easy Bond (EB) and silorane adhesive (SIL)] were applied to flat deep dentine surfaces from extracted human molars. The restorations were constructed using resin composites Filtek Silorane or Filtek Z350 (3M ESPE). After 24 h using the two methods of simulated pulpal pressure or no pulpal pressure (control groups), the bonded teeth were cut into specimens and submitted to μTBS and silver uptake examination. Results were analysed with two-way anova and Tukey's test (P < 0.05). Both methods of simulated pulpal pressure led statistically similar μTBS for all adhesives. No difference between control and pulpal pressure groups was found for SIL and GB. EB led significant drop (P = 0.002) in bond strength under pulpal pressure. Silver impregnation was increased after both methods of simulated pulpal pressure for all adhesives, and it was similar between the simulated pulpal pressure methods. The innovative method to simulate pulpal pressure behaved similarly to the classic one and could be used as an alternative. The HEMA-free one-step and the two-step self-etch adhesives had acceptable resistance against pulpal pressure, unlike the HEMA-rich adhesive. © 2013 International Endodontic Journal. Published by John Wiley & Sons Ltd.
Quantum-capacity-approaching codes for the detected-jump channel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassl, Markus; Wei Zhaohui; Ji Zhengfeng
2010-12-15
The quantum-channel capacity gives the ultimate limit for the rate at which quantum data can be reliably transmitted through a noisy quantum channel. Degradable quantum channels are among the few channels whose quantum capacities are known. Given the quantum capacity of a degradable channel, it remains challenging to find a practical coding scheme which approaches capacity. Here we discuss code designs for the detected-jump channel, a degradable channel with practical relevance describing the physics of spontaneous decay of atoms with detected photon emission. We show that this channel can be used to simulate a binary classical channel with both erasures and bit flips. The capacity of the simulated classical channel gives a lower bound on the quantum capacity of the detected-jump channel. When the jump probability is small, it almost equals the quantum capacity. Hence using a classical capacity-approaching code for the simulated classical channel yields a quantum code which approaches the quantum capacity of the detected-jump channel.
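The capacity of the simulated classical channel, binary input with both erasures and bit flips, reduces to a short entropy calculation; by the channel's bit-flip symmetry the uniform input is optimal. The parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def H(probs):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    probs = np.asarray([q for q in probs if q > 0])
    return float(-np.sum(probs * np.log2(probs)))

def capacity_flip_erasure(p_flip, p_erase):
    """Capacity of a binary channel with outputs {0, 1, ?}: the channel is
    symmetric under bit complement, so C = I(X;Y) at the uniform input."""
    p_ok = 1.0 - p_flip - p_erase
    H_y = H([p_erase, (1 - p_erase) / 2, (1 - p_erase) / 2])  # output entropy
    H_y_given_x = H([p_ok, p_flip, p_erase])                   # noise entropy
    return H_y - H_y_given_x

# Sanity checks against the two classical limits:
print(capacity_flip_erasure(0.0, 0.2))   # pure erasure channel: 1 - e = 0.8
print(capacity_flip_erasure(0.1, 0.0))   # pure BSC: 1 - H2(0.1) ≈ 0.531
```

Evaluating this expression at the flip and erasure probabilities induced by the detected-jump channel gives the lower bound on its quantum capacity described above.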
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schubert, Alexander, E-mail: schubert@irsamc.ups-tlse.fr; Meier, Christoph; Falvo, Cyril
2016-08-07
We present mixed quantum-classical simulations on relaxation and dephasing of vibrationally excited carbon monoxide within a protein environment. The methodology is based on a vibrational surface hopping approach treating the vibrational states of CO quantum mechanically, while all remaining degrees of freedom are described by means of classical molecular dynamics. The CO vibrational states form the "surfaces" for the classical trajectories of protein and solvent atoms. In return, environmentally induced non-adiabatic couplings between these states cause transitions describing the vibrational relaxation from first principles. The molecular dynamics simulation yields a detailed atomistic picture of the energy relaxation pathways, taking the molecular structure and dynamics of the protein and its solvent fully into account. Using the ultrafast photolysis of CO in the hemoprotein FixL as an example, we study the relaxation of vibrationally excited CO and evaluate the role of each of the FixL residues forming the heme pocket.
Nonclassicality of Temporal Correlations.
Brierley, Stephen; Kosowski, Adrian; Markiewicz, Marcin; Paterek, Tomasz; Przysiężna, Anna
2015-09-18
The results of spacelike separated measurements are independent of distant measurement settings, a property one might call two-way no-signaling. In contrast, timelike separated measurements are only one-way no-signaling since the past is independent of the future but not vice versa. For this reason some temporal correlations that are formally identical to nonclassical spatial correlations can still be modeled classically. We propose a new formulation of Bell's theorem for temporal correlations; namely, we define nonclassical temporal correlations as the ones which cannot be simulated by propagating in time the classical information content of a quantum system given by the Holevo bound. We first show that temporal correlations between results of any projective quantum measurements on a qubit can be simulated classically. Then we present a sequence of general measurements on a single m-level quantum system that cannot be explained by propagating in time an m-level classical system and using classical computers with unlimited memory.
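The first claim, that projective qubit measurements are classically simulable in time, can be sketched as a one-bit Markov model: after a projective measurement the state is a known eigenstate, so propagating the last outcome as a single classical bit reproduces the quantum statistics. The measurement axes and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two consecutive projective measurements on a qubit along Bloch axes at
# relative angle theta.  After the first measurement the state is a known
# eigenstate, so one classical bit suffices to reproduce the statistics.
theta = np.pi / 3
n = 400_000

a = np.where(rng.random(n) < 0.5, 1, -1)       # first outcome (mixed state)
p_same = np.cos(theta / 2.0) ** 2              # Born rule for collapsed state
b = np.where(rng.random(n) < p_same, a, -a)    # second outcome, one-bit relay

print(np.mean(a * b))   # ≈ cos(theta) = 0.5, the quantum correlation
```

The paper's nonclassicality result kicks in only for general (non-projective) measurements on higher-dimensional systems, where the Holevo-bounded classical message is no longer enough.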
Multinuclear NMR of CaSiO(3) glass: simulation from first-principles.
Pedone, Alfonso; Charpentier, Thibault; Menziani, Maria Cristina
2010-06-21
An integrated computational method which couples classical molecular dynamics simulations with density functional theory calculations is used to simulate the solid-state NMR spectra of amorphous CaSiO(3). Two CaSiO(3) glass models are obtained by shell-model molecular dynamics simulations, successively relaxed at the GGA-PBE level of theory. The calculation of the NMR parameters (chemical shielding and quadrupolar parameters), which are then used to simulate solid-state 1D and 2D-NMR spectra of silicon-29, oxygen-17 and calcium-43, is achieved by the gauge including projector augmented-wave (GIPAW) and the projector augmented-wave (PAW) methods. It is shown that the limitations due to the finite size of the MD models can be overcome using a Kernel Density Estimation (KDE) approach to simulate the spectra since it better accounts for the disorder effects on the NMR parameter distribution. KDE allows reconstructing a smoothed NMR parameter distribution from the MD/GIPAW data. Simulated NMR spectra calculated with the present approach are found to be in excellent agreement with the experimental data. This further validates the CaSiO(3) structural model obtained by MD simulations allowing the inference of relationships between structural data and NMR response. The methods used to simulate 1D and 2D-NMR spectra from MD GIPAW data have been integrated in a package (called fpNMR) freely available on request.
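The KDE reconstruction step can be sketched in a few lines: a sparse set of computed NMR parameters is smoothed into a continuous distribution by summing Gaussian kernels. The shift sample, bandwidth rule, and grid below are illustrative stand-ins for MD/GIPAW output, not fpNMR's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stand-in for MD/GIPAW output: a sparse sample of isotropic
# 29Si chemical shifts (ppm); values and sample size are illustrative only.
shifts = rng.normal(loc=-80.0, scale=6.0, size=40)

def kde(grid, samples, bandwidth):
    """Gaussian kernel density estimate over a chemical-shift grid."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z**2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2.0 * np.pi))

grid = np.linspace(-110.0, -50.0, 601)
bw = 1.06 * shifts.std() * len(shifts) ** (-0.2)   # Silverman's rule of thumb
density = kde(grid, shifts, bw)

dx = grid[1] - grid[0]
print(density.sum() * dx)   # normalization check: integrates to ≈ 1
```

Smoothing in parameter space rather than binning is what lets a finite-size MD model yield a line shape comparable to the experimental spectrum.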
A satellite simulator for TRMM PR applied to climate model simulations
NASA Astrophysics Data System (ADS)
Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.
2017-12-01
Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
Selectivity trend of gas separation through nanoporous graphene
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Hongjun; Chen, Zhongfang; Dai, Sheng
2014-01-29
We demonstrate that porous graphene can efficiently separate gases according to their molecular sizes using molecular dynamics (MD) simulations. The flux sequence from the classical MD simulation is H2>CO2>>N2>Ar>CH4, which generally follows the trend in the kinetic diameters. Moreover, this trend is also confirmed from the fluxes based on the computed free energy barriers for gas permeation using the umbrella sampling method and the kinetic theory of gases. Both brute-force MD simulations and free-energy calculations lead to a flux trend consistent with experiments. Case studies of two compositions of CO2/N2 mixtures further demonstrate the separation capability of nanoporous graphene.
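The barrier-plus-kinetic-theory route to the flux ordering can be illustrated with a toy estimate: the permeation flux scales like the wall-collision rate times the Boltzmann factor of the free-energy barrier. The barrier heights below are placeholders, not the paper's umbrella-sampling results.

```python
import math

# Kinetic-gas estimate of relative permeation flux through a pore:
#   J ~ sqrt(kT / 2*pi*m) * exp(-dF / kT)
# Barrier heights below are illustrative placeholders only.
kB_T = 0.0259  # eV at ~300 K

gases = {
    # name: (molar mass in g/mol, hypothetical barrier dF in eV)
    "H2":  (2.0,  0.02),
    "CO2": (44.0, 0.10),
    "N2":  (28.0, 0.30),
    "Ar":  (40.0, 0.34),
    "CH4": (16.0, 0.45),
}

def relative_flux(mass, barrier):
    # constant prefactors dropped: only the ordering matters here
    return math.sqrt(1.0 / mass) * math.exp(-barrier / kB_T)

ranked = sorted(gases, key=lambda g: relative_flux(*gases[g]), reverse=True)
print(" > ".join(ranked))
```

With any barriers that grow with kinetic diameter, the exponential dominates the mass prefactor and reproduces the H2 > CO2 >> N2 > Ar > CH4 ordering.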
An Improved SoC Test Scheduling Method Based on Simulated Annealing Algorithm
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Shen, Zhihang; Gao, Huaien; Chen, Bianna; Zheng, Weida; Xiong, Xiaoming
2017-02-01
In this paper, we propose an improved SoC test scheduling method based on the simulated annealing algorithm (SA). We first disorganize the IP core assignment for each TAM to produce a new solution for SA, allocate the TAM width for each TAM using a greedy algorithm, and calculate the corresponding testing time. The core assignment is then accepted or rejected according to the simulated annealing criterion, and the optimum solution is finally attained. We run the test scheduling experiment with the international reference circuits provided by the International Test Conference 2002 (ITC'02); the results show that our algorithm is superior to the conventional integer linear programming algorithm (ILP), the simulated annealing algorithm (SA), and the genetic algorithm (GA). When the TAM width reaches 48, 56, and 64, the testing time based on our algorithm is less than that of the classic methods, with optimization rates of 30.74%, 3.32%, and 16.13%, respectively. Moreover, the testing time based on our algorithm is very close to that of the improved genetic algorithm (IGA), the current state of the art.
Oxygen transport properties estimation by DSMC-CT simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
Coupling DSMC simulations with classical trajectory calculations is emerging as a powerful tool to improve the predictive capabilities of computational rarefied gas dynamics. The considerable increase in computational effort noted in early applications of the method (Koura, 1997) can be compensated by running simulations on massively parallel computers. In particular, GPU acceleration has been found quite effective in reducing the computing time (Ferrigni, 2012; Norman et al., 2013) of DSMC-CT simulations. The aim of the present work is to study rarefied oxygen flows by modeling binary collisions through an accurate potential energy surface, obtained from molecular beam scattering (Aquilanti et al., 1999). The accuracy of the method is assessed by calculating molecular oxygen shear viscosity and heat conductivity following three different DSMC-CT simulation methods. In the first, transport properties are obtained from DSMC-CT simulations of spontaneous fluctuations of an equilibrium state (Bruno et al., Phys. Fluids, 23, 093104, 2011). In the second method, the collision trajectory calculation is incorporated in a Monte Carlo integration procedure to evaluate Taxman's expressions for the transport properties of polyatomic gases (Taxman, 1959). In the third, non-equilibrium zero- and one-dimensional rarefied gas dynamics simulations are adopted and the transport properties are computed from the non-equilibrium fluxes of momentum and energy. The three methods provide close values of the transport properties, with their estimated statistical error not exceeding 3%. The experimental values are slightly underestimated, the percentage deviation being, again, a few percent.
On simulations of rarefied vapor flows with condensation
NASA Astrophysics Data System (ADS)
Bykov, Nikolay; Gorbachev, Yuriy; Fyodorov, Stanislav
2018-05-01
Results of the direct simulation Monte Carlo of 1D spherical and 2D axisymmetric expansions into vacuum of condensing water vapor are presented. Two models based on the kinetic approach and the size-corrected classical nucleation theory are employed for simulations. The difference in obtained results is discussed and advantages of the kinetic approach in comparison with the modified classical theory are demonstrated. The impact of clusterization on flow parameters is observed when the volume fraction of clusters in the expansion region exceeds 5%. Comparison of the simulation data with the experimental results demonstrates good agreement.
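The classical nucleation theory that the kinetic approach is compared against is built around the barrier to forming a critical cluster and its strong dependence on supersaturation. A minimal sketch of the uncorrected CNT barrier, using textbook water parameters (not the paper's size-corrected model constants):

```python
import math

# Classical nucleation theory barrier for water vapour.
# Parameter values are standard textbook numbers, used only to
# illustrate the formula.
kB    = 1.380649e-23      # J/K
sigma = 0.072             # J/m^2, surface tension of water (~300 K)
v_mol = 3.0e-29           # m^3, volume per molecule in the liquid
T     = 300.0             # K

def barrier(S):
    """CNT barrier dG* = 16*pi*sigma^3*v^2 / (3*(kB*T*ln S)^2)."""
    return 16 * math.pi * sigma**3 * v_mol**2 / (3 * (kB * T * math.log(S))**2)

for S in (2.0, 5.0, 10.0):
    print(f"S = {S:4.1f}: dG*/kBT = {barrier(S) / (kB * T):7.1f}")
```

The 1/(ln S)^2 dependence is why nucleation switches on abruptly in an expansion: a modest increase in supersaturation collapses the barrier by an order of magnitude.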
NASA Astrophysics Data System (ADS)
Sondak, David; Oberai, Assad
2012-10-01
Novel large eddy simulation (LES) models are developed for incompressible magnetohydrodynamics (MHD). These models include the application of the variational multiscale (VMS) formulation of LES to the equations of incompressible MHD, a new residual-based eddy viscosity model (RBEVM), and a mixed LES model that combines the strengths of both of these models. The new models result in a consistent numerical method that is relatively simple to implement; a dynamic procedure for determining model coefficients is no longer required. The new LES models are tested on a decaying Taylor-Green vortex generalized to MHD and benchmarked against classical and state-of-the-art LES turbulence models as well as direct numerical simulations (DNS). These new models are able to account for the essential MHD physics, which is demonstrated via comparisons of energy spectra. We also compare the performance of our models to a DNS simulation by A. Pouquet et al., for which the ratio of DNS modes to LES modes is 262,144. Additionally, we extend these models to a finite element setting in which boundary conditions play a role. A classic problem on which we test these models is turbulent channel flow, which in the case of MHD is called Hartmann flow.
On the Analysis of Multistep-Out-of-Grid Method for Celestial Mechanics Tasks
NASA Astrophysics Data System (ADS)
Olifer, L.; Choliy, V.
2016-09-01
Occasionally, there is a need for highly accurate prediction of a celestial body's trajectory. The most common ways to do this are to solve Kepler's equation analytically or to use Runge-Kutta or Adams integrators to solve the equation of motion numerically. For low-orbit satellites, it is critical to account for the geopotential and other forces that influence the motion. As a result, the right-hand side of the equation of motion becomes much more expensive to evaluate, and classical integrators are no longer very effective. On the other hand, there is the multistep-out-of-grid (MOG) method, which combines the Runge-Kutta and Adams methods. The MOG method is based on using m on-grid values of the solution and n × m off-grid derivative estimations. Such a method can provide stable integrators of the maximum possible order, O(h^(m+mn+n-1)). The main subject of this research was to implement and analyze the MOG method for solving the satellite equation of motion, taking into account an Earth geopotential model (e.g., EGM2008 (Pavlis et al., 2008)) and with the possibility of adding other perturbations such as atmospheric drag or solar radiation pressure. Simulations were made for satellites on low orbits with various eccentricities (from 0.1 to 0.9). Results of the MOG integrator were compared with those of Runge-Kutta and Adams integrators. It was shown that the MOG method has better accuracy than classical methods of the same order and requires fewer right-hand-side evaluations when working at high orders. That gives it some advantage over "classical" methods.
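The classical Runge-Kutta baseline that MOG is compared against is easy to sketch. The example below integrates a normalized two-body problem (mu = 1, circular orbit of radius 1) with fixed-step RK4 and checks energy conservation over one period; units and orbit are illustrative, not a low-orbit satellite with geopotential terms.

```python
import numpy as np

# Fixed-step RK4 on the two-body equation of motion r'' = -mu * r / |r|^3.
mu = 1.0

def deriv(state):
    r, v = state[:2], state[2:]
    a = -mu * r / np.linalg.norm(r) ** 3
    return np.concatenate([v, a])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.0, 0.0, 1.0])   # circular orbit: r = 1, v = 1
h, n = 0.01, int(2 * np.pi / 0.01)       # roughly one orbital period
for _ in range(n):
    state = rk4_step(state, h)

# Specific orbital energy E = v^2/2 - mu/r should stay at -0.5
E = 0.5 * np.dot(state[2:], state[2:]) - mu / np.linalg.norm(state[:2])
print("energy after one period:", E)
```

Note the cost structure the abstract refers to: RK4 needs four right-hand-side evaluations per step, which is exactly what becomes expensive once the right-hand side includes a high-degree geopotential expansion.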
Non-orthogonal tool/flange and robot/world calibration.
Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim
2012-12-01
For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB. It is computed using a least-squares approach. Because real robots and localisation devices are both afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method where full robot/world and partial tool/flange calibration is possible by using localisation devices providing fewer than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
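A least-squares formulation of AX = YB can be sketched with Kronecker products: vec(AX) = (I (x) A) vec(X) and vec(YB) = (B^T (x) I) vec(Y), so each measurement pair contributes 16 homogeneous equations and the solution is the null-space direction of the stacked system. The transforms below are synthetic, noise-free stand-ins that illustrate only the linear algebra, not the paper's full calibration pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rigid():
    # random rotation via QR, plus a translation, as a 4x4 homogeneous matrix
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

# Ground-truth tool/flange (X) and robot/world (Y) transforms (synthetic).
X_true, Y_true = random_rigid(), random_rigid()

# Measurement pairs satisfying A X = Y B  =>  B = Y^-1 A X
A_list = [random_rigid() for _ in range(5)]
B_list = [np.linalg.inv(Y_true) @ A @ X_true for A in A_list]

# Stack [ (I (x) A)  -(B^T (x) I) ] [vecX; vecY] = 0 and take the
# null-space direction from the SVD (allows non-orthogonal X, Y).
rows = [np.hstack([np.kron(np.eye(4), A), -np.kron(B.T, np.eye(4))])
        for A, B in zip(A_list, B_list)]
M = np.vstack(rows)
_, _, Vt = np.linalg.svd(M)
sol = Vt[-1]
X = sol[:16].reshape(4, 4, order="F")   # column-major vec convention
Y = sol[16:].reshape(4, 4, order="F")
X, Y = X / X[3, 3], Y / Y[3, 3]         # fix the overall scale

print("max |X - X_true| =", np.abs(X - X_true).max())
```

With noisy measurements the smallest singular vector becomes a least-squares estimate rather than an exact null vector, which is where the non-orthogonality tolerance of the approach comes in.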
Cerezo, Javier; Aranda, Daniel; Avila Ferrer, Francisco J; Prampolini, Giacomo; Mazzeo, Giuseppe; Longhi, Giovanna; Abbate, Sergio; Santoro, Fabrizio
2018-06-01
We extend a recently proposed mixed quantum/classical method for computing the vibronic electronic circular dichroism (ECD) spectrum of molecules with different conformers, to cases where more than one hindered rotation is present. The method generalizes the standard procedure, based on the simple Boltzmann average of the vibronic spectra of the stable conformers, and includes the contribution of structures that sample all the accessible conformational space. It is applied to the simulation of the ECD spectrum of (S)-2,2,2-trifluoroanthrylethanol, a molecule with easily interconvertible conformers, whose spectrum exhibits a pattern of alternating positive and negative vibronic peaks. Results are in very good agreement with experiment and show that spectra averaged over all the sampled conformational space can deviate significantly from the simple average of the contributions of the stable conformers. The present mixed quantum/classical method is able to capture the effect of the nonlinear dependence of the rotatory strength on the molecular structure and of the anharmonic couplings among the modes responsible for molecular flexibility. Despite its computational cost, the procedure is still affordable and promises to be useful in all cases where the ECD shape arises from a subtle balance between vibronic effects and conformational variety. © 2018 Wiley Periodicals, Inc.
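The "standard procedure" that the method generalizes, a Boltzmann average over the vibronic spectra of stable conformers, can be sketched directly. Conformer energies and band shapes below are invented for illustration; real vibronic ECD line shapes are far richer than single Gaussians.

```python
import numpy as np

# Boltzmann average of per-conformer spectra.
kB_T = 0.592  # kcal/mol at ~298 K

energies = np.array([0.0, 0.4, 1.1])          # relative conformer energies
weights = np.exp(-energies / kB_T)
weights /= weights.sum()

wavelength = np.linspace(250.0, 400.0, 500)   # nm

def band(center, sign):
    # toy Gaussian ECD band; the sign mimics alternating rotatory strengths
    return sign * np.exp(-((wavelength - center) ** 2) / (2 * 15.0 ** 2))

spectra = [band(300.0, +1), band(320.0, -1), band(340.0, +1)]
ecd = sum(w * s for w, s in zip(weights, spectra))

print("weights:", np.round(weights, 3))
```

The paper's point is precisely that this weighted sum over a few minima can miss contributions from the rest of the conformational space when the rotatory strength varies nonlinearly along the flexible coordinates.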
On the behavior of isolated and embedded carbon nano-tubes in a polymeric matrix
NASA Astrophysics Data System (ADS)
Rahimian-Koloor, Seyed Mostafa; Moshrefzadeh-Sani, Hadi; Mehrdad Shokrieh, Mahmood; Majid Hashemianzadeh, Seyed
2018-02-01
In the classical micro-mechanical method, the moduli of the reinforcement and the matrix are used to predict the stiffness of composites. However, using the classical micro-mechanical method to predict the stiffness of CNT/epoxy nanocomposites leads to overestimated results. One of the main reasons for this overestimation is that the method uses the stiffness of the isolated CNT and ignores the nanoscale effect of the CNT. In the present study, non-equilibrium molecular dynamics simulations in the isothermal-isobaric ensemble were used to consider the influence of CNT length on the stiffness of the nanocomposite. The results indicated that, due to nanoscale effects, the reinforcing efficiency of the embedded CNT is not constant and decreases with decreasing length. Based on these results, a relationship was derived that predicts the effective stiffness of an embedded CNT in terms of its length. Using this relationship leads to more accurate predictions of the elastic modulus of the nanocomposite, which was validated against experimental counterparts.
NUMERICAL MODELING OF THE COAGULATION AND POROSITY EVOLUTION OF DUST AGGREGATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okuzumi, Satoshi; Sakagami, Masa-aki; Tanaka, Hidekazu, E-mail: satoshi.okuzumi@ax2.ecs.kyoto-u.ac.j
2009-12-20
Porosity evolution of dust aggregates is crucial in understanding dust evolution in protoplanetary disks. In this study, we present useful tools to study the coagulation and porosity evolution of dust aggregates. First, we present a new numerical method for simulating dust coagulation and porosity evolution as an extension of the conventional Smoluchowski equation. This method follows the evolution of the mean porosity for each aggregate mass simultaneously with the evolution of the mass distribution function. This method reproduces the results of previous Monte Carlo simulations with much less computational expense. Second, we propose a new collision model for porous dust aggregates on the basis of our N-body experiments on aggregate collisions. As the first step, we focus on 'hit-and-stick' collisions, which involve neither compression nor fragmentation of aggregates. We first obtain empirical data on porosity changes between the classical limits of ballistic cluster-cluster and particle-cluster aggregation. Using the data, we construct a recipe for the porosity change due to general hit-and-stick collisions as well as formulae for the aerodynamical and collisional cross sections. Our collision model is thus more realistic than a previous model of Ormel et al. based on the classical aggregation limits only. Simple coagulation simulations using the extended Smoluchowski method show that our collision model explains the fractal dimensions of porous aggregates observed in a full N-body simulation and a laboratory experiment. By contrast, similar simulations using the collision model of Ormel et al. result in much less porous aggregates, meaning that this model underestimates the porosity increase upon unequal-sized collisions. In addition, we find that aggregates at the high-mass end of the distribution can have a considerably small aerodynamical cross section per unit mass compared with aggregates of lower masses. This occurs when aggregates drift under uniform acceleration (e.g., gravity) and their collision is induced by the difference in their terminal velocities. We point out an important implication of this discovery for dust growth in protoplanetary disks.
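The mass-distribution backbone of such a calculation, a discrete Smoluchowski equation, can be sketched with a constant collision kernel and explicit Euler stepping. The porosity evolution and realistic kernels of the paper are not reproduced here; bin k simply holds the number density of aggregates made of k monomers.

```python
import numpy as np

# Discrete Smoluchowski coagulation:
#   dn_k/dt = (1/2) * sum_{i+j=k} K n_i n_j  -  n_k * sum_j K n_j
n_bins, K, dt, steps = 64, 1.0, 0.01, 200

n = np.zeros(n_bins + 1)
n[1] = 1.0                      # start from monomers only

for _ in range(steps):
    gain = np.zeros_like(n)
    for k in range(2, n_bins + 1):
        gain[k] = 0.5 * K * sum(n[i] * n[k - i] for i in range(1, k))
    loss = K * n * n.sum()
    n = n + dt * (gain - loss)

# Mass should be conserved up to the (tiny) leakage past the last bin.
mass = sum(k * n[k] for k in range(1, n_bins + 1))
print("total mass in resolved bins:", mass)
```

The paper's extension attaches a mean porosity to each mass bin and evolves it alongside n_k, which is what makes the method so much cheaper than tracking individual aggregates in a Monte Carlo simulation.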
Simulations of the Neutron Gas in the Inner Crust of Neutron Stars
NASA Astrophysics Data System (ADS)
Vandegriff, Elizabeth; Horowitz, Charles; Caplan, Matthew
2017-09-01
Inside neutron stars, the structures known as `nuclear pasta' are found in the crust. This pasta forms near nuclear density as nucleons arrange in spaghetti- or lasagna-like structures to minimize their energy. We run classical molecular dynamics simulations to visualize the geometry of this pasta and study the distribution of nucleons. In the simulations, we observe that the pasta is embedded in a gas of neutrons, which we call the `sauce'. In this work, we developed two methods for determining the density of neutrons in the gas, one which is accurate at low temperatures and a second which justifies an extrapolation at high temperatures. Running simulations with no Coulomb interactions, we find that the neutron density increases linearly with temperature for every proton fraction we simulated. NSF REU Grant PHY-1460882 at Indiana University.
Data Analysis Approaches for the Risk-Informed Safety Margins Characterization Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Alfonsi, Andrea; Maljovec, Daniel P.
2016-09-01
In the past decades, several numerical simulation codes have been employed to simulate accident dynamics (e.g., RELAP5-3D, RELAP-7, MELCOR, MAAP). In order to evaluate the impact of uncertainties on accident dynamics, several stochastic methodologies have been coupled with these codes. These stochastic methods range from classical Monte-Carlo and Latin hypercube sampling to stochastic polynomial methods. Similar approaches have been introduced into the risk and safety community, where stochastic methods (such as RAVEN, ADAPT, MCDET, ADS) have been coupled with safety analysis codes in order to evaluate the safety impact of timing and sequencing of events. These approaches are usually called Dynamic PRA or simulation-based PRA methods. These uncertainty and safety methods usually generate a large number of simulation runs (database storage may be on the order of gigabytes or higher). The scope of this paper is to present a broad overview of methods and algorithms that can be used to analyze and extract information from large data sets containing time-dependent data. In this context, "extracting information" means constructing input-output correlations, finding commonalities, and identifying outliers. Some of the algorithms presented here have been developed or are under development within the RAVEN statistical framework.
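One of the classical sampling schemes mentioned above, Latin hypercube sampling, can be sketched in a few lines: each input dimension is split into n equal-probability strata, and every stratum is hit exactly once. This is a generic sketch, not the sampler used in any of the cited frameworks.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    # one stratified point per interval [i/n, (i+1)/n) in each dimension
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])   # decouple the dimensions
    return u

samples = latin_hypercube(10, 3)
print(samples.shape)
```

Compared with plain Monte Carlo, the stratification guarantees marginal coverage of every input range with the same number of code runs, which matters when each run is an expensive accident simulation.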
Evaluating conflation methods using uncertainty modeling
NASA Astrophysics Data System (ADS)
Doucette, Peter; Dolloff, John; Canavosio-Zuzelski, Roberto; Lenihan, Michael; Motsko, Dennis
2013-05-01
The classic problem of computer-assisted conflation involves the matching of individual features (e.g., point, polyline, or polygon vectors) as stored in a geographic information system (GIS), between two different sets (layers) of features. The classical goal of conflation is the transfer of feature metadata (attributes) from one layer to another. The age of free public and open source geospatial feature data has significantly increased the opportunity to conflate such data to create enhanced products. There are currently several spatial conflation tools in the marketplace with varying degrees of automation. An ability to evaluate conflation tool performance quantitatively is of operational value, although manual truthing of matched features is laborious and costly. In this paper, we present a novel methodology that uses spatial uncertainty modeling to simulate realistic feature layers to streamline evaluation of feature matching performance for conflation methods. Performance results are compiled for DCGIS street centerline features.
Effects of tunnelling and asymmetry for system-bath models of electron transfer
NASA Astrophysics Data System (ADS)
Mattiat, Johann; Richardson, Jeremy O.
2018-03-01
We apply the newly derived nonadiabatic golden-rule instanton theory to asymmetric models describing electron-transfer in solution. The models go beyond the usual spin-boson description and have anharmonic free-energy surfaces with different values for the reactant and product reorganization energies. The instanton method gives an excellent description of the behaviour of the rate constant with respect to asymmetry for the whole range studied. We derive a general formula for an asymmetric version of the Marcus theory based on the classical limit of the instanton and find that this gives significant corrections to the standard Marcus theory. A scheme is given to compute this rate based only on equilibrium simulations. We also compare the rate constants obtained by the instanton method with its classical limit to study the effect of tunnelling and other quantum nuclear effects. These quantum effects can increase the rate constant by orders of magnitude.
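The standard (symmetric) Marcus rate that serves as the classical reference point can be sketched directly from its exponential factor. The driving forces and reorganization energy below are illustrative values, and the prefactor is dropped since only ratios between rates matter here.

```python
import math

# Classical Marcus exponential factor for electron transfer:
#   k ~ exp(-(dG + lambda)^2 / (4 * lambda * kB * T))
kB_T = 0.0259  # eV at ~300 K

def marcus_rate(dG, lam):
    return math.exp(-((dG + lam) ** 2) / (4.0 * lam * kB_T))

lam = 1.0                                  # reorganization energy, eV
for dG in (-0.5, -1.0, -1.5):              # driving force, eV
    print(f"dG = {dG:5.2f} eV: k = {marcus_rate(dG, lam):.3e}")
```

The rate peaks at dG = -lambda and falls off symmetrically on either side (the inverted region); the asymmetric generalization in the paper breaks exactly this symmetry by allowing different reactant and product reorganization energies.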
From transistor to trapped-ion computers for quantum chemistry.
Yung, M-H; Casanova, J; Mezzacapo, A; McClean, J; Lamata, L; Aspuru-Guzik, A; Solano, E
2014-01-07
Over the last few decades, quantum chemistry has progressed through the development of computational methods based on modern digital computers. However, these methods can hardly fulfill the exponentially-growing resource requirements when applied to large quantum systems. As pointed out by Feynman, this restriction is intrinsic to all computational models based on classical physics. Recently, the rapid advancement of trapped-ion technologies has opened new possibilities for quantum control and quantum simulations. Here, we present an efficient toolkit that exploits both the internal and motional degrees of freedom of trapped ions for solving problems in quantum chemistry, including molecular electronic structure, molecular dynamics, and vibronic coupling. We focus on applications that go beyond the capacity of classical computers, but may be realizable on state-of-the-art trapped-ion systems. These results allow us to envision a new paradigm of quantum chemistry that shifts from the current transistor to a near-future trapped-ion-based technology.
Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography
Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.
2017-01-01
We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454
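The two-solve structure of the adjoint gradient (one forward solve, one adjoint solve, independent of the parameter count) can be sketched on a generic linear model K(theta) u = f with J = (1/2)||u - u_obs||^2. The matrices below are random symmetric stand-ins, not an elasticity discretization.

```python
import numpy as np

rng = np.random.default_rng(0)

# dJ/dtheta_i = -lambda^T (dK/dtheta_i) u,  with  K^T lambda = u - u_obs
n, p = 8, 3
def sym(m): return m + m.T
K0 = sym(rng.normal(size=(n, n))) + 20 * np.eye(n)     # keep it invertible
Ks = [sym(rng.normal(size=(n, n))) for _ in range(p)]  # dK/dtheta_i
f = rng.normal(size=n)
u_obs = rng.normal(size=n)

def K(theta):
    return K0 + sum(t * Ki for t, Ki in zip(theta, Ks))

theta = np.array([0.3, -0.2, 0.1])
u = np.linalg.solve(K(theta), f)                 # forward solve
lam = np.linalg.solve(K(theta).T, u - u_obs)     # adjoint solve (K^T, not K)
grad = np.array([-lam @ (Ki @ u) for Ki in Ks])  # full gradient, p components

# Finite-difference check of one component
eps = 1e-6
def J(th):
    uu = np.linalg.solve(K(th), f)
    return 0.5 * np.sum((uu - u_obs) ** 2)
fd = (J(theta + eps * np.eye(p)[0]) - J(theta - eps * np.eye(p)[0])) / (2 * eps)
print("adjoint vs finite difference:", grad[0], fd)
```

Note the adjoint solve uses K^T: when K is symmetric (self-adjoint) the two solves use the same matrix, which is the classical assumption the paper shows is not actually required.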
ERIC Educational Resources Information Center
Sinharay, Sandip
2010-01-01
Recently, there has been an increasing level of interest in subscores for their potential diagnostic value. Haberman (2008) suggested a method based on classical test theory to determine whether subscores have added value over total scores. This paper provides a literature review and reports when subscores were found to have added value for…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu
The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable, as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.
Many-body kinetics of dynamic nuclear polarization by the cross effect
NASA Astrophysics Data System (ADS)
Karabanov, A.; Wiśniewski, D.; Raimondi, F.; Lesanovsky, I.; Köckenberger, W.
2018-03-01
Dynamic nuclear polarization (DNP) is an out-of-equilibrium method for generating nonthermal spin polarization which provides large signal enhancements in modern diagnostic methods based on nuclear magnetic resonance. A particular instance is cross-effect DNP, which involves the interaction of two coupled electrons with the nuclear spin ensemble. Here we develop a theory for this important DNP mechanism and show that the nonequilibrium nuclear polarization buildup is effectively driven by three-body incoherent Markovian dissipative processes involving simultaneous state changes of two electrons and one nucleus. We identify different parameter regimes for effective polarization transfer and discuss under which conditions the polarization dynamics can be simulated by classical kinetic Monte Carlo methods. Our theoretical approach allows simulations of the polarization dynamics on an individual spin level for ensembles consisting of hundreds of nuclear spins. The insight obtained by these simulations can be used to find optimal experimental conditions for cross-effect DNP and to design tailored radical systems that provide optimal DNP efficiency.
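The classical kinetic Monte Carlo simulation mentioned above can be sketched with the Gillespie algorithm on a deliberately simplified flip/relax model: each unpolarized nuclear spin is pumped at one rate and each polarized spin relaxes at another. Rates and system size are invented; the real three-body cross-effect rates are not reproduced.

```python
import math, random

random.seed(2)

# Gillespie kinetic Monte Carlo for a two-rate flip/relax spin model.
N = 100          # nuclear spins
k_up, k_down = 1.0, 0.2
up = 0           # number of polarized spins
t, t_end = 0.0, 20.0

while t < t_end:
    r_up = k_up * (N - up)       # total rate of pumping events
    r_dn = k_down * up           # total rate of relaxation events
    R = r_up + r_dn
    t += random.expovariate(R)   # exponential waiting time to the next event
    if random.random() < r_up / R:
        up += 1
    else:
        up -= 1

print("steady-state polarization ~", up / N)
```

The steady-state polarization fluctuates around k_up / (k_up + k_down); in the cross-effect case, the event list would instead contain three-body processes flipping two electrons and one nucleus at once.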
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations performed employed, in one way or another, a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems of discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov-chain ensemble-averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties of the Si and SiC systems were calculated. Results obtained from static simulation calculations for slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering-based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
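The core idea, scoring degrees of freedom by how well they separate labeled conformational states, can be sketched with a simple between/within variance ratio. This toy score stands in for the paper's supervised classifiers, and the synthetic data below plays the role of MD coordinates with cluster labels from a Markov state model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "trajectory": 6 coordinates, two conformational states, with
# only coordinate 2 actually distinguishing the states.
n_per, n_feat = 200, 6
labels = np.repeat([0, 1], n_per)

X = rng.normal(size=(2 * n_per, n_feat))
X[labels == 1, 2] += 3.0        # feature 2 is the informative coordinate

def separation_score(x, y):
    # between-cluster variance of the group means over mean within-cluster
    # variance: large when the coordinate separates the states
    groups = [x[y == c] for c in np.unique(y)]
    between = np.var([g.mean() for g in groups])
    within = np.mean([g.var() for g in groups])
    return between / within

scores = np.array([separation_score(X[:, j], labels) for j in range(n_feat)])
print("most discriminative feature:", int(np.argmax(scores)))
```

Ranking coordinates by such a score recovers the order parameter that distinguishes the states without inspecting the trajectory by eye, which is the automation the method aims at.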
A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-01-01
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyze how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10−5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.
Tikhonov, Denis S; Sharapa, Dmitry I; Schwabedissen, Jan; Rybkin, Vladimir V
2016-10-12
In this study, we investigate the ability of classical molecular dynamics (MD) and Monte Carlo (MC) simulations to model intramolecular vibrational motion. These simulations were used to compute thermally averaged geometrical structures and infrared vibrational intensities for a benchmark set previously studied by gas electron diffraction (GED): CS2, benzene, chloromethylthiocyanate, pyrazinamide, and 9,12-I2-1,2-closo-C2B10H10. The MD sampling of NVT ensembles was performed using chains of Nosé-Hoover (NH) thermostats as well as the generalized Langevin equation (GLE) thermostat. The performance of the theoretical models based on the classical MD and MC simulations was compared with the experimental data and also with alternative computational techniques: a conventional approach based on a Taylor expansion of the potential energy surface, path-integral MD, and MD with a quantum thermal bath (QTB) based on the GLE. A straightforward application of the classical simulations resulted, as expected, in poor accuracy of the calculated observables due to the complete neglect of quantum effects. However, the introduction of a posteriori quantum corrections significantly improved the situation. The application of these corrections to MD simulations of systems with large-amplitude motions was demonstrated for chloromethylthiocyanate. The comparison of the theoretical vibrational spectra revealed that the GLE thermostat used in this work is not applicable for this purpose. On the other hand, the NH chains yielded reasonably good results.
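One widely used family of a posteriori quantum corrections multiplies a classical spectral line shape by a frequency-dependent factor. The harmonic correction factor Q(ω) = βħω / (1 − e^(−βħω)) is sketched below as a generic example; it is a common choice in the literature, not necessarily the specific correction applied by the authors, and βħ is treated as a single dimensionless scale here.

```python
import numpy as np

def harmonic_correction(omega, beta_hbar=1.0):
    """Harmonic quantum correction factor Q(w) = x / (1 - exp(-x)),
    x = beta*hbar*w. Q -> 1 as w -> 0 (low-frequency modes unchanged)."""
    x = beta_hbar * np.asarray(omega, dtype=float)
    out = np.ones_like(x)          # Q(0) = 1 by the x -> 0 limit
    nz = x != 0.0
    out[nz] = x[nz] / -np.expm1(-x[nz])   # expm1 avoids cancellation at small x
    return out

omega = np.linspace(0.0, 10.0, 5)
q = harmonic_correction(omega)
```

Multiplying a classical IR intensity by `q` boosts high-frequency stretches, which is exactly where purely classical sampling underestimates quantum spectra.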
Is social projection based on simulation or theory? Why new methods are needed for differentiating
Bazinger, Claudia; Kühberger, Anton
2012-01-01
The literature on social cognition reports many instances of a phenomenon titled ‘social projection’ or ‘egocentric bias’. These terms indicate egocentric predictions, i.e., an over-reliance on the self when predicting the cognition, emotion, or behavior of other people. The classic method to diagnose egocentric prediction is to establish high correlations between our own and other people's cognition, emotion, or behavior. We argue that this method is incorrect because there is a different way to come to a correlation between own and predicted states, namely, through the use of theoretical knowledge. Thus, the use of correlational measures is not sufficient to identify the source of social predictions. Based on the distinction between simulation theory and theory theory, we propose the following alternative methods for inferring prediction strategies: independent vs. juxtaposed predictions, the use of ‘hot’ mental processes, and the use of participants’ self-reports.
Richard, David; Speck, Thomas
2018-03-28
We investigate the kinetics and the free energy landscape of the crystallization of hard spheres from a supersaturated metastable liquid through direct simulations and forward flux sampling. In this first paper, we describe and test two different ways to reconstruct the free energy barriers from the sampled steady state probability distribution of cluster sizes without sampling the equilibrium distribution. The first method is based on mean first passage times, and the second method is based on splitting probabilities. We verify both methods for a single particle moving in a double-well potential. For the nucleation of hard spheres, these methods allow us to probe a wide range of supersaturations and to reconstruct the kinetics and the free energy landscape from the same simulation. Results are consistent with the scaling predicted by classical nucleation theory although a quantitative fit requires a rather large effective interfacial tension.
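The second route mentioned in this abstract rests on splitting probabilities. For 1D overdamped dynamics the committor has a closed quadrature form, p(x) = ∫ₐˣ e^{βF(y)} dy / ∫ₐᵇ e^{βF(y)} dy, sketched below for a symmetric double well. The profile and units are illustrative, not the hard-sphere cluster coordinate of the paper.

```python
import numpy as np

def splitting_probability(F, x, beta=1.0):
    """Committor for 1D overdamped diffusion in free-energy profile F(x):
    probability of reaching x[-1] before x[0], via trapezoidal quadrature
    of p(x) = int_a^x exp(b*F) / int_a^b exp(b*F)."""
    w = np.exp(beta * np.asarray(F, dtype=float))
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    return cum / cum[-1]

x = np.linspace(-1.5, 1.5, 2001)
F = (x**2 - 1.0)**2 / 0.1            # symmetric double well, barrier at x = 0
p = splitting_probability(F, x)       # p = 1/2 at the barrier top, by symmetry
```

The p = 1/2 isocommittor locating the barrier top is the same diagnostic used, in higher dimensions, to validate reaction coordinates in path-sampling studies.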
NASA Astrophysics Data System (ADS)
Lv, X.; Zhao, Y.; Huang, X. Y.; Xia, G. H.; Su, X. H.
2007-07-01
A new three-dimensional (3D) matrix-free implicit unstructured multigrid finite volume (FV) solver for structural dynamics is presented in this paper. The solver is first validated using classical 2D and 3D cantilever problems. It is shown that very accurate predictions of the fundamental natural frequencies of the problems can be obtained by the solver with fast convergence rates. This method has been integrated into our existing FV compressible solver [X. Lv, Y. Zhao, et al., An efficient parallel/unstructured-multigrid preconditioned implicit method for simulating 3d unsteady compressible flows with moving objects, Journal of Computational Physics 215(2) (2006) 661-690] based on the immersed membrane method (IMM) [X. Lv, Y. Zhao, et al., as mentioned above]. Results for the interaction between the fluid and an immersed fixed-free cantilever are also presented to demonstrate the potential of this integrated fluid-structure interaction approach.
Color dithering methods for LEGO-like 3D printing
NASA Astrophysics Data System (ADS)
Sun, Pei-Li; Sie, Yuping
2015-01-01
Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended, as its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show that the proposed multi-layer dithering method can substantially improve the image quality of LEGO-like 3D printing.
Dillenseger, Jean-Louis; Esneault, Simon; Garnier, Carole
2008-01-01
This paper describes a method for modeling the evolution of tissue temperature over time in hyperthermia. More precisely, this approach is used to simulate the curative treatment of hepatocellular carcinoma by percutaneous high-intensity ultrasound surgery. The evolution of tissue temperature over time is classically described by Pennes' bioheat transfer equation, which is generally solved by a finite difference method. In this paper we present a method in which the bioheat transfer equation can be solved algebraically after a Fourier transformation over the space coordinates. The implementation and boundary conditions of this method are shown and compared with the finite difference method.
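The core idea, that the diffusion operator becomes algebraic after a spatial Fourier transform so each mode evolves analytically, can be sketched in 1D. This sketch keeps only the pure diffusion term; the perfusion and source terms of the full Pennes equation, and the paper's boundary handling, are deliberately omitted, and all constants are illustrative.

```python
import numpy as np

# d/dt T = D * d2/dx2 T  becomes  d/dt T_hat(k) = -D * k**2 * T_hat(k)
# in Fourier space, so each mode decays by exp(-D * k**2 * t) exactly.
n, L, D, t = 256, 1.0, 1e-3, 5.0
x = np.linspace(0.0, L, n, endpoint=False)
T0 = np.exp(-((x - 0.5) ** 2) / 0.005)          # initial hot spot (Gaussian)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)    # wavenumbers on periodic grid
T = np.fft.ifft(np.fft.fft(T0) * np.exp(-D * k**2 * t)).real
```

Because the k = 0 mode is untouched, total heat is conserved exactly; the peak temperature decays as the hot spot spreads, with no time-stepping error at all.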
Ab initio molecular simulations on specific interactions between amyloid beta and monosaccharides
NASA Astrophysics Data System (ADS)
Nomura, Kazuya; Okamoto, Akisumi; Yano, Atsushi; Higai, Shin'ichi; Kondo, Takashi; Kamba, Seiji; Kurita, Noriyuki
2012-09-01
Aggregation of amyloid β (Aβ) peptides, which is a key pathogenetic event in Alzheimer's disease, can be caused by cell-surface saccharides. We here investigated stable structures of the solvated complexes of Aβ with some types of monosaccharides using molecular simulations based on protein-ligand docking and classical molecular mechanics methods. Moreover, the specific interactions between Aβ and the monosaccharides were elucidated at an electronic level by ab initio fragment molecular orbital calculations. Based on the results, we proposed which type of monosaccharide prefers to have large binding affinity to Aβ and inhibit the Aβ aggregation.
PyRETIS: A well-done, medium-sized python library for rare events.
Lervik, Anders; Riccardi, Enrico; van Erp, Titus S
2017-10-30
Transition path sampling techniques are becoming common approaches in the study of rare events at the molecular scale. More efficient methods, such as transition interface sampling (TIS) and replica exchange transition interface sampling (RETIS), allow the investigation of rare events, for example, chemical reactions and structural/morphological transitions, in a reasonable computational time. Here, we present PyRETIS, a Python library for performing TIS and RETIS simulations. PyRETIS directs molecular dynamics (MD) simulations in order to sample rare events with unbiased dynamics. PyRETIS is designed to be easily interfaced with any molecular simulation package and in the present release, it has been interfaced with GROMACS and CP2K, for classical and ab initio MD simulations, respectively. © 2017 Wiley Periodicals, Inc.
Method for construction of a biased potential for hyperdynamic simulation of atomic systems
NASA Astrophysics Data System (ADS)
Duda, E. V.; Kornich, G. V.
2017-10-01
An approach to constructing a biased potential for hyperdynamic simulation of atomic systems is considered. Using this approach, the diffusion of an atom adsorbed on the surface of a two-dimensional crystal and a vacancy in the bulk of the crystal are simulated. The influence of the variation in the potential barriers due to thermal vibrations of atoms on the results of calculations is discussed. It is shown that the bias of the potential in the hyperdynamic simulation makes it possible to obtain statistical samples of transitions of atomic systems between states, similar to those given by classical molecular dynamics. However, hyperdynamics significantly accelerates computations in comparison with molecular dynamics in the case of temperature-activated transitions and the associated processes in atomic systems.
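In hyperdynamics, time simulated on the biased surface is rescaled by the Boltzmann factor of the bias at each visited configuration, which is where the acceleration over plain molecular dynamics comes from. The sketch below implements this generic Voter-style clock; the bias samples and step size are hypothetical, and the construction of the biased potential itself (the subject of the abstract) is not shown.

```python
import numpy as np

def hyperdynamics_time_boost(delta_v_samples, dt, beta=1.0):
    """Accumulate the hyperdynamics clock: each step on the biased surface
    advances physical time by dt * exp(beta * dV(x_i)), where dV >= 0 is the
    bias evaluated at the visited configuration."""
    dv = np.asarray(delta_v_samples, dtype=float)
    boosted_time = dt * np.sum(np.exp(beta * dv))
    boost_factor = boosted_time / (dt * dv.size)   # average speed-up
    return boosted_time, boost_factor

# Hypothetical trajectory: bias of 2 kT inside the well, zero near the barrier
dv = np.array([2.0, 2.0, 0.0, 2.0])
t_phys, boost = hyperdynamics_time_boost(dv, dt=1e-3)
```

Since the bias must vanish at transition states, steps near a barrier contribute no speed-up, while deep-well steps are strongly accelerated, matching the temperature-activated regime described in the abstract.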
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu
Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
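The contraction step itself — Fourier-interpolating a P-bead ring polymer down to fewer beads by discarding its highest normal modes — can be sketched as follows. The function name and interface are illustrative; contracting to a single bead recovers the centroid mentioned in the abstract.

```python
import numpy as np

def contract_ring_polymer(beads, n_contracted):
    """Fourier contraction: keep only the lowest normal modes of the
    P-bead ring polymer and resample onto n_contracted beads.
    beads has shape (P, n_coords); the centroid is preserved."""
    P = beads.shape[0]
    modes = np.fft.rfft(beads, axis=0)
    k = min(n_contracted // 2 + 1, modes.shape[0])   # modes that survive
    # the (n'/P) factor fixes FFT normalization so mode 0 maps to the centroid
    return np.fft.irfft(modes[:k], n=n_contracted, axis=0) * (n_contracted / P)

beads = np.random.default_rng(0).normal(size=(16, 3))  # 16 beads, 3 coordinates
centroid = contract_ring_polymer(beads, 1)[0]          # equals beads.mean(axis=0)
```

The expensive electronic-structure force is then evaluated only on the contracted beads, while the cheap reference force runs on the full ring polymer, which is the source of the reported speed-up.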
Brembs, Björn; Heisenberg, Martin
2000-01-01
Ever since learning and memory have been studied experimentally, the relationship between operant and classical conditioning has been controversial. Operant conditioning is any form of conditioning that essentially depends on the animal's behavior. It relies on operant behavior. A motor output is called operant if it controls a sensory variable. The Drosophila flight simulator, in which the relevant behavior is a single motor variable (yaw torque), fully separates the operant and classical components of a complex conditioning task. In this paradigm a tethered fly learns, operantly or classically, to prefer and avoid certain flight orientations in relation to the surrounding panorama. Yaw torque is recorded and, in the operant mode, controls the panorama. Using a yoked control, we show that classical pattern learning necessitates more extensive training than operant pattern learning. We compare in detail the microstructure of yaw torque after classical and operant training but find no evidence for acquired behavioral traits after operant conditioning that might explain this difference. We therefore conclude that the operant behavior has a facilitating effect on the classical training. In addition, we show that an operantly learned stimulus is successfully transferred from the behavior of the training to a different behavior. This result unequivocally demonstrates that during operant conditioning classical associations can be formed.
Driven topological systems in the classical limit
NASA Astrophysics Data System (ADS)
Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel
2017-03-01
Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this we simulate the dynamics of the classical system, which show that the interactions play the role of Markovian, or time-dependent disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.
Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Zheng, Steven; Suaning, Gregg J
2014-01-01
Simulated prosthetic vision (SPV) in normally sighted subjects is an established way of investigating the prospective efficacy of visual prosthesis designs in visually guided tasks such as mobility. To perform meaningful SPV mobility studies in computer-based environments, a credible representation of both the virtual scene to navigate and the experienced artificial vision has to be established. It is therefore prudent to make optimal use of existing hardware and software solutions when establishing a testing framework. The authors aimed at improving the realism and immersion of SPV by integrating state-of-the-art yet low-cost consumer technology. The feasibility of body motion tracking to control movement in photo-realistic virtual environments was evaluated in a pilot study. Five subjects were recruited and performed an obstacle avoidance and wayfinding task using either keyboard and mouse, gamepad or Kinect motion tracking. Walking speed and collisions were analyzed as basic measures for task performance. Kinect motion tracking resulted in lower performance as compared to classical input methods, yet results were more uniform across vision conditions. The chosen framework was successfully applied in a basic virtual task and is suited to realistically simulate real-world scenes under SPV in mobility research. Classical input peripherals remain a feasible and effective way of controlling the virtual movement. Motion tracking, despite its limitations and early state of implementation, is intuitive and can eliminate between-subject differences due to familiarity to established input methods.
A simple method for simulating wind profiles in the boundary layer of tropical cyclones
Bryan, George H.; Worsnop, Rochelle P.; Lundquist, Julie K.; ...
2016-11-01
A method to simulate characteristics of wind speed in the boundary layer of tropical cyclones in an idealized manner is developed and evaluated. The method can be used in a single-column modelling set-up with a planetary boundary-layer parametrization, or within large-eddy simulations (LES). The key step is to include terms in the horizontal velocity equations representing advection and centrifugal acceleration in tropical cyclones that occurs on scales larger than the domain size. Compared to other recently developed methods, which require two input parameters (a reference wind speed, and radius from the centre of a tropical cyclone) this new method also requires a third input parameter: the radial gradient of reference wind speed. With the new method, simulated wind profiles are similar to composite profiles from dropsonde observations; in contrast, a classic Ekman-type method tends to overpredict inflow-layer depth and magnitude, and two recently developed methods for tropical cyclone environments tend to overpredict near-surface wind speed. When used in LES, the new technique produces vertical profiles of total turbulent stress and estimated eddy viscosity that are similar to values determined from low-level aircraft flights in tropical cyclones. Lastly, temporal spectra from LES produce an inertial subrange for frequencies ≳0.1 Hz, but only when the horizontal grid spacing ≲20 m.
Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne
2016-01-05
In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalizing estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling for the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
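The classical Fisher combination that serves as the baseline in this abstract is short enough to state in full: X = −2 Σ ln pᵢ is χ²-distributed with 2k degrees of freedom under independence, and for even degrees of freedom the χ² tail probability has a closed form. The sketch below is the textbook test only, not the authors' proposed relaxation of the independence assumption; the p-values are made up.

```python
import math

def fisher_combination(p_values):
    """Classical Fisher combination of k independent p-values:
    X = -2 * sum(ln p_i) ~ chi2 with 2k degrees of freedom."""
    k = len(p_values)
    stat = -2.0 * sum(math.log(p) for p in p_values)
    # chi2 survival function for even dof 2k:
    # P(X > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    half = stat / 2.0
    sf = math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))
    return stat, sf

stat, p_combined = fisher_combination([0.01, 0.04, 0.30])  # hypothetical p-values
```

With k = 1 the formula reduces to returning the single p-value unchanged, a useful sanity check; correlated phenotypes violate the independence assumption, which is exactly the failure mode the abstract's proposed method addresses.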
NASA Technical Reports Server (NTRS)
Al-Saadi, Jassim A.
1993-01-01
A computational simulation of a transonic wind tunnel test section with longitudinally slotted walls is developed and described herein. The nonlinear slot model includes dynamic pressure effects and a plenum pressure constraint, and each slot is treated individually. The solution is performed using a finite-difference method that solves an extended transonic small disturbance equation. The walls serve as the outer boundary conditions in the relaxation technique, and an interaction procedure is used at the slotted walls. Measured boundary pressures are not required to establish the wall conditions but are currently used to assess the accuracy of the simulation. This method can also calculate a free-air solution as well as solutions that employ the classical homogeneous wall conditions. The simulation is used to examine two commercial transport aircraft models at a supercritical Mach number for zero-lift and cruise conditions. Good agreement between measured and calculated wall pressures is obtained for the model geometries and flow conditions examined herein. Some localized disagreement is noted, which is attributed to improper simulation of viscous effects in the slots.
Pisutha-Arnond, N; Chan, V W L; Iyer, M; Gavini, V; Thornton, K
2013-01-01
We introduce a new approach to represent a two-body direct correlation function (DCF) in order to alleviate the computational demand of classical density functional theory (CDFT) and enhance the predictive capability of the phase-field crystal (PFC) method. The approach utilizes a rational function fit (RFF) to approximate the two-body DCF in Fourier space. We use the RFF to show that short-wavelength contributions of the two-body DCF play an important role in determining the thermodynamic properties of materials. We further show that using the RFF to empirically parametrize the two-body DCF allows us to obtain the thermodynamic properties of solids and liquids that agree with the results of CDFT simulations with the full two-body DCF without incurring significant computational costs. In addition, the RFF can also be used to improve the representation of the two-body DCF in the PFC method. Last, the RFF allows for a real-space reformulation of the CDFT and PFC method, which enables descriptions of nonperiodic systems and the use of nonuniform and adaptive grids.
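A rational-function fit of a sampled curve can be posed as a single linear least-squares problem by multiplying through by the denominator (c·Q = P with Q's constant coefficient fixed to 1). The sketch below shows this generic linearization; the polynomial degrees and the synthetic test curve are illustrative assumptions, not the parametrization of the two-body DCF used in the paper.

```python
import numpy as np

def rational_fit(k, c, m=2, n=2):
    """Fit c(k) ~ P(k)/Q(k), deg P = m, deg Q = n, Q(0) = 1, by
    linearizing c*Q = P into one least-squares problem."""
    # columns: [1, k, ..., k^m, -c*k, ..., -c*k^n]
    A = np.hstack([np.vander(k, m + 1, increasing=True),
                   -c[:, None] * np.vander(k, n + 1, increasing=True)[:, 1:]])
    coef, *_ = np.linalg.lstsq(A, c, rcond=None)
    p, q = coef[:m + 1], np.concatenate([[1.0], coef[m + 1:]])
    return p, q            # coefficients in increasing order of power

k = np.linspace(0.0, 3.0, 50)
c_true = (1.0 + 0.5 * k) / (1.0 + k**2)        # synthetic curve to recover
p, q = rational_fit(k, c_true)
fit = np.polyval(p[::-1], k) / np.polyval(q[::-1], k)
```

Because the true curve here is itself rational of the fitted degrees, the linearized problem is solved exactly; for a real sampled DCF the same machinery yields the best compromise fit in this weighted sense.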
Quantum dynamical simulations of local field enhancement in metal nanoparticles.
Negre, Christian F A; Perassi, Eduardo M; Coronado, Eduardo A; Sánchez, Cristián G
2013-03-27
Field enhancements (Γ) around small Ag nanoparticles (NPs) are calculated using a quantum dynamical simulation formalism and the results are compared with electrodynamic simulations using the discrete dipole approximation (DDA) in order to address the important issue of the intrinsic atomistic structure of NPs. Quite remarkably, in both quantum and classical approaches the highest values of Γ are located in the same regions around single NPs. However, by introducing a complete atomistic description of the metallic NPs in optical simulations, a different pattern of the Γ distribution is obtained. Knowing the correct pattern of the Γ distribution around NPs is crucial for understanding the spectroscopic features of molecules inside hot spots. The enhancement produced by surface plasmon coupling is studied by using both approaches in NP dimers for different inter-particle distances. The results show that the trend of the variation of Γ versus inter-particle distance is different for classical and quantum simulations. This difference is explained in terms of a charge transfer mechanism that cannot be obtained with classical electrodynamics. Finally, time dependent distribution of the enhancement factor is simulated by introducing a time dependent field perturbation into the Hamiltonian, allowing an assessment of the localized surface plasmon resonance quantum dynamics.
NASA Astrophysics Data System (ADS)
Dumitrica, Traian; Hourahine, Ben; Aradi, Balint; Frauenheim, Thomas
We discuss the coupling of the objective boundary conditions into the SCC density-functional-based tight-binding code DFTB+. The implementation is enabled by a generalization to the helical case of the classical Ewald method, specifically by Ewald-like formulas that do not rely on a unit cell with translational symmetry. The robustness of the method in addressing complex hetero-nuclear nano- and bio-fibrous systems is demonstrated with illustrative simulations on a helical boron nitride nanotube, a screw-dislocated zinc oxide nanowire, and an ideal double-strand DNA. Work supported by NSF CMMI 1332228.
NASA Astrophysics Data System (ADS)
Flores, P.; Duchêne, L.; Lelotte, T.; Bouffioux, C.; El Houdaigui, F.; Van Bael, A.; He, S.; Duflou, J.; Habraken, A. M.
2005-08-01
The bi-axial experimental equipment developed by Flores makes it possible to perform Bauschinger shear tests as well as successive or simultaneous simple shear and plane-strain tests. Such experiments, together with classical tensile tests, probe the material behavior in order to identify the yield locus and the hardening models. With tests performed on two steel grades, the methods applied to identify classical yield surfaces such as the Hill or Hosford ones, as well as isotropic Swift-type hardening or kinematic Armstrong-Frederick hardening models, are explained. A comparison with the Taylor-Bishop-Hill yield locus is also provided. The effect of the choice of both yield locus and hardening model is presented for two applications: Single Point Incremental Forming (SPIF) and cup deep drawing.
Discrete and continuum modelling of soil cutting
NASA Astrophysics Data System (ADS)
Coetzee, C. J.
2014-12-01
Both continuum and discrete methods are used to investigate the soil cutting process. The Discrete Element Method (DEM) is used for the discrete modelling and the Material-Point Method (MPM) is used for continuum modelling. MPM is a so-called particle method or meshless finite element method. Standard finite element methods have difficulty in modelling the entire cutting process due to large displacements and deformation of the mesh. The use of meshless methods overcomes this problem. MPM can model large deformations, frictional contact at the soil-tool interface, and dynamic effects (inertia forces). In granular materials the discreteness of the system is often important and rotational degrees of freedom are active, which might require enhanced theoretical approaches like polar continua. In polar continuum theories, the material points are considered to possess orientations. A material point has three degrees of freedom for rigid rotations, in addition to the three classic translational degrees of freedom. The Cosserat continuum is the most transparent and straightforward extension of the nonpolar (classic) continuum. Two-dimensional DEM and MPM (polar and nonpolar) simulations of the cutting problem are compared to experiments. The drag force and flow patterns are compared using cohesionless corn grains as the material. The corn macro (continuum) and micro (DEM) properties were obtained from shear and oedometer tests. Results show that the dilatancy angle plays a significant role in the flow of material but has less of an influence on the draft force. Nonpolar MPM is the most accurate in predicting blade forces, blade-soil interface stresses, and the position and orientation of shear bands. Polar MPM fails in predicting the orientation of the shear band, but is less sensitive to mesh size and mesh orientation compared to nonpolar MPM. DEM simulations show less material dilation than observed during experiments.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
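The source-term splitting described above can be illustrated on a scalar toy problem. The following is a minimal sketch of Strang operator splitting, assuming a simple advection-relaxation equation and first-order upwind transport on a periodic grid; it is not the thesis code, and all parameters are illustrative:

```python
import numpy as np

# Sketch of Strang operator splitting for a 1D hyperbolic equation
# with a source term,  q_t + a q_x = -k q,  mirroring the strategy of
# splitting Biot's equations into a hyperbolic part plus a source term.

def upwind_step(q, a, dx, dt):
    """First-order upwind update for q_t + a q_x = 0 (a > 0, periodic)."""
    return q - a * dt / dx * (q - np.roll(q, 1))

def source_step(q, k, dt):
    """Exact update for the relaxation ODE q_t = -k q."""
    return q * np.exp(-k * dt)

def strang_step(q, a, k, dx, dt):
    """Strang splitting: half source, full transport, half source."""
    q = source_step(q, k, 0.5 * dt)
    q = upwind_step(q, a, dx, dt)
    return source_step(q, k, 0.5 * dt)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.exp(-200 * (x - 0.3) ** 2)   # initial Gaussian pulse
dx, a, k = x[1] - x[0], 1.0, 2.0
dt = 0.9 * dx / a                    # CFL-limited time step
for _ in range(100):
    q = strang_step(q, a, k, dx, dt)
```

Because the transport step is conservative and the source step is exact, the total mass decays by exactly exp(-k t); a second-order wave-propagation scheme would replace the upwind update in a production code.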
Validation of Bayesian analysis of compartmental kinetic models in medical imaging.
Sitek, Arkadiusz; Li, Quanzheng; El Fakhri, Georges; Alpert, Nathaniel M
2016-10-01
Kinetic compartmental analysis is frequently used to compute physiologically relevant quantitative values from time series of images. In this paper, a new approach based on Bayesian analysis to obtain information about these parameters is presented and validated. The closed-form of the posterior distribution of kinetic parameters is derived with a hierarchical prior to model the standard deviation of normally distributed noise. Markov chain Monte Carlo methods are used for numerical estimation of the posterior distribution. Computer simulations of the kinetics of F18-fluorodeoxyglucose (FDG) are used to demonstrate drawing statistical inferences about kinetic parameters and to validate the theory and implementation. Additionally, point estimates of kinetic parameters and covariance of those estimates are determined using the classical non-linear least squares approach. Posteriors obtained using methods proposed in this work are accurate as no significant deviation from the expected shape of the posterior was found (one-sided P>0.08). It is demonstrated that the results obtained by the standard non-linear least-square methods fail to provide accurate estimation of uncertainty for the same data set (P<0.0001). The results of this work validate the new methods for computer simulation of FDG kinetics. Results show that in situations where the classical approach fails in accurate estimation of uncertainty, Bayesian estimation provides accurate information about the uncertainties in the parameters. Although a particular example of FDG kinetics was used in the paper, the methods can be extended to different pharmaceuticals and imaging modalities. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
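The Markov chain Monte Carlo estimation described above can be sketched on a toy kinetic model. The following minimal Metropolis sampler assumes a mono-exponential model y(t) = A·exp(-k·t) with known Gaussian noise and flat priors; the paper's hierarchical prior and closed-form posterior for FDG kinetics are not reproduced here, and all parameter values are illustrative:

```python
import numpy as np

# Minimal Metropolis sketch of Bayesian estimation for a toy
# one-compartment kinetic model y(t) = A * exp(-k * t) + noise.

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 40)
A_true, k_true, sigma = 2.0, 0.5, 0.05
y = A_true * np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(theta):
    """Log posterior with flat priors and known Gaussian noise."""
    A, k = theta
    if A <= 0 or k <= 0:                       # positivity constraints
        return -np.inf
    resid = y - A * np.exp(-k * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

theta = np.array([1.0, 1.0])                   # starting guess
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.02, 2)    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
post = np.array(samples[5000:])                # discard burn-in
A_hat, k_hat = post.mean(axis=0)
```

Unlike a non-linear least-squares point estimate, the retained samples characterize the full posterior, so parameter uncertainty comes directly from the spread of `post`.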
Fermion-to-qubit mappings with varying resource requirements for quantum simulation
NASA Astrophysics Data System (ADS)
Steudtner, Mark; Wehner, Stephanie
2018-06-01
The mapping of fermionic states onto qubit states, as well as the mapping of fermionic Hamiltonians onto quantum gates, enables us to simulate electronic systems with a quantum computer. By advancing the understanding of many-body systems in chemistry and physics, quantum simulation is one of the great promises of the coming age of quantum computers. Interestingly, the number of qubits required by standard mappings seems to be agnostic of the actual number of particles as well as other symmetries. This leads to qubit requirements well above the minimum suggested by combinatorial considerations. In this work, we develop methods that allow us to trade off qubit requirements against the complexity of the resulting quantum circuit. We first show that any classical code used to map the state of a fermionic Fock space to qubits gives rise to a mapping of fermionic models to quantum gates. As an illustrative example, we present a mapping based on a nonlinear classical error correcting code, which leads to significant qubit savings albeit at the expense of additional quantum gates. We proceed to use this framework to present a number of simpler mappings that lead to qubit savings with a more modest increase in gate complexity. We discuss the role of symmetries such as particle conservation, and savings that could be obtained if an experimental platform could easily realize multi-controlled gates.
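As a baseline for the mappings discussed above, the standard (linear) Jordan-Wigner encoding can be written down directly. The sketch below builds the qubit-space lowering operators as dense matrices and verifies the canonical anticommutation relations; it illustrates only the conventional mapping, not the paper's nonlinear-code constructions:

```python
import numpy as np

# Jordan-Wigner fermion-to-qubit mapping: the annihilation operator
# for mode j on n qubits is
#   a_j = Z x ... x Z x sigma^- x I x ... x I
# with j Z factors on the left enforcing fermionic antisymmetry.

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma^- = |0><1|

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def lowering(j, n):
    """Jordan-Wigner annihilation operator a_j on n qubits."""
    return kron_all([Z] * j + [sm] + [I2] * (n - j - 1))

n = 3
a = [lowering(j, n) for j in range(n)]

def anticomm(A, B):
    return A @ B + B @ A

# Canonical anticommutation relations:
# {a_i, a_j} = 0 and {a_i, a_j^dag} = delta_ij * identity.
ok = all(
    np.allclose(anticomm(a[i], a[j]), 0)
    and np.allclose(anticomm(a[i], a[j].T.conj()),
                    np.eye(2 ** n) * (i == j))
    for i in range(n) for j in range(n)
)
```

This linear encoding uses one qubit per mode regardless of particle number, which is exactly the overhead the nonlinear codes above are designed to reduce.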
Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with the multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps in the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations when compared with the model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotton, Stephen J.; Miller, William H., E-mail: millerwh@berkeley.edu
A recently described symmetrical windowing methodology [S. J. Cotton and W. H. Miller, J. Phys. Chem. A 117, 7190 (2013)] for quasi-classical trajectory simulations is applied here to the Meyer-Miller [H.-D. Meyer and W. H. Miller, J. Chem. Phys. 70, 3214 (1979)] model for the electronic degrees of freedom in electronically non-adiabatic dynamics. Results generated using this classical approach are observed to be in very good agreement with accurate quantum mechanical results for a variety of test applications, including problems where coherence effects are significant such as the challenging asymmetric spin-boson system.
NASA Astrophysics Data System (ADS)
Baidakov, Vladimir G.
2016-02-01
The process of bubble nucleation in a Lennard-Jones (LJ) liquid is studied by molecular dynamics (MD) simulation. The bubble nucleation rate J is determined by the mean life-time method at temperatures above that of the triple point in the region of negative pressures. The results of simulation are compared with classical nucleation theory (CNT) and modified classical nucleation theory (MCNT), in which the work of formation of a critical bubble is determined in the framework of the van der Waals-Cahn-Hilliard gradient theory (GT). It has been found that the values of J obtained in MD simulation systematically exceed the data of CNT, and this excess in the nucleation rate reaches 8-10 orders of magnitude close to the triple point temperature. The results of MCNT are in satisfactory agreement with the data of MD simulation. To describe the properties of vapor-phase nuclei in the framework of GT, an equation of state has been built up which describes stable, metastable and labile regions of LJ fluids. The surface tension of critical bubbles γ has been found from CNT and data of MD simulation as a function of the radius of curvature of the surface of tension R*. The dependence γ(R*) has also been calculated from GT. The Tolman length has been determined, which is negative and in modulus equal to ≈(0.1 - 0.2) σ. The paper discusses the applicability of the Tolman formula to the description of the properties of critical nuclei in nucleation.
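The CNT baseline against which the simulations are compared can be sketched numerically. The following illustrates how strongly the predicted rate J depends on the surface tension entering the work of formation W* = 16πγ³/(3Δp²); all numbers are illustrative reduced-unit values, not the paper's data:

```python
import numpy as np

# Classical nucleation theory (CNT) estimate for bubble nucleation:
#   W* = 16*pi*gamma^3 / (3*dp^2),   J = J0 * exp(-W*/kT),
# in Lennard-Jones reduced units (kB = 1).

def cnt_rate(gamma, dp, T, J0=1.0):
    """CNT nucleation rate per unit volume and the barrier W*."""
    W_star = 16.0 * np.pi * gamma ** 3 / (3.0 * dp ** 2)
    return J0 * np.exp(-W_star / T), W_star

gamma_flat, dp, T = 0.5, 0.3, 0.7        # illustrative reduced units
J_flat, W_flat = cnt_rate(gamma_flat, dp, T)

# Laplace relation gives the critical bubble radius:
R_star = 2.0 * gamma_flat / dp

# Curvature corrections that lower gamma for small critical bubbles
# (the gradient-theory effect described above) change J enormously;
# even a 15% reduction in gamma shifts J by several orders of magnitude:
J_curved, W_curved = cnt_rate(0.85 * gamma_flat, dp, T)
orders = np.log10(J_curved / J_flat)
```

The cubic dependence of W* on γ is why modest curvature corrections to the surface tension can account for the 8-10 orders-of-magnitude gap between CNT and the MD rates reported above.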
Dudley, Peter N; Bonazza, Riccardo; Porter, Warren P
2013-07-01
Animal momentum and heat transfer analysis has historically used direct animal measurements or approximations to calculate drag and heat transfer coefficients. Research can now use modern 3D rendering and computational fluid dynamics software to simulate animal-fluid interactions. Key questions are the level of agreement between simulations and experiments and how superior they are to classical approximations. In this paper we compared experimental and simulated heat transfer and drag calculations on a scale model solid aluminum African elephant casting. We found good agreement between experimental and simulated data and large differences from classical approximations. We used the simulation results to calculate coefficients for heat transfer and drag of the elephant geometry. Copyright © 2013 Wiley Periodicals, Inc.
Spread-Spectrum Carrier Estimation With Unknown Doppler Shift
NASA Technical Reports Server (NTRS)
DeLeon, Phillip L.; Scaife, Bradley J.
1998-01-01
We present a method for the frequency estimation of a BPSK modulated, spread-spectrum carrier with unknown Doppler shift. The approach relies on a classic periodogram in conjunction with a spectral matched filter. Simulation results indicate accurate carrier estimation with processing gains near 40. A DSP-based prototype has been implemented for real-time carrier estimation for use in New Mexico State University's proposal for NASA's Demand Assignment Multiple Access service.
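The periodogram stage of such an estimator can be sketched in a few lines. The following assumes a complex tone in noise with an unknown Doppler offset and takes the FFT peak as the coarse frequency estimate; the spectral matched filter and BPSK despreading of the actual system are omitted, and all parameters are hypothetical:

```python
import numpy as np

# Periodogram-based carrier frequency estimation under an unknown
# Doppler shift: |FFT|^2 of the received samples, peak bin -> estimate.

rng = np.random.default_rng(1)
fs = 1.0e4                  # sample rate (Hz), illustrative
N = 4096                    # observation length
t = np.arange(N) / fs
f_doppler = 1234.5          # unknown carrier offset to be estimated (Hz)
x = np.exp(2j * np.pi * f_doppler * t)
x += (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2) * 2.0

# Classic periodogram: squared magnitude of the FFT.
spec = np.abs(np.fft.fft(x)) ** 2
freqs = np.fft.fftfreq(N, 1.0 / fs)
f_hat = freqs[np.argmax(spec)]   # coarse estimate, resolution fs/N
```

The coherent integration over N samples provides the processing gain that lets the peak stand above the noise floor even at negative per-sample SNR; a finer estimate can then be obtained by interpolating around the peak bin.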
Classical and quantum simulations of warm dense carbon
NASA Astrophysics Data System (ADS)
Whitley, Heather; Sanchez, David; Hamel, Sebastien; Correa, Alfredo; Benedict, Lorin
We have applied classical and DFT-based molecular dynamics (MD) simulations to study the equation of state of carbon in the warm dense matter regime (ρ = 3.7 g/cc, 0.86 eV
Topics in quantum cryptography, quantum error correction, and channel simulation
NASA Astrophysics Data System (ADS)
Luo, Zhicheng
In this thesis, we mainly investigate four different topics: efficiently implementable codes for quantum key expansion [51], quantum error-correcting codes based on privacy amplification [48], private classical capacity of quantum channels [44], and classical channel simulation with quantum side information [49, 50]. For the first topic, we propose an efficiently implementable quantum key expansion protocol, capable of increasing the size of a pre-shared secret key by a constant factor. Previously, the Shor-Preskill proof [64] of the security of the Bennett-Brassard 1984 (BB84) [6] quantum key distribution protocol relied on the theoretical existence of good classical error-correcting codes with the "dual-containing" property. But the explicit and efficiently decodable construction of such codes is unknown. We show that we can lift the dual-containing constraint by employing non-dual-containing codes with excellent performance and efficient decoding algorithms. For the second topic, we propose a construction of Calderbank-Shor-Steane (CSS) [19, 68] quantum error-correcting codes, which are originally based on pairs of mutually dual-containing classical codes, by combining a classical code with a two-universal hash function. We show, using the results of Renner and Koenig [57], that the communication rates of such codes approach the hashing bound on tensor powers of Pauli channels in the limit of large block-length. For the third topic, we prove a regularized formula for the secret-key-assisted capacity region of a quantum channel for transmitting private classical information. This result parallels the work of Devetak on entanglement-assisted quantum communication capacity. The formula yields a new family of protocols, the private father protocol, within the resource inequality framework; it includes private classical communication without assisting secret keys as a child protocol.
For the fourth topic, we study and solve the problem of classical channel simulation with quantum side information at the receiver. Our main theorem has two important corollaries: rate-distortion theory with quantum side information and common randomness distillation. Simple proofs of achievability of classical multi-terminal source coding problems can be made via a unified approach using the channel simulation theorem as building blocks. The fully quantum generalization of the problem is also conjectured with outer and inner bounds on the achievable rate pairs.
Sakurai, Atsunori; Tanimura, Yoshitaka
2011-04-28
To investigate the role of quantum effects in vibrational spectroscopies, we have carried out numerically exact calculations of linear and nonlinear response functions for an anharmonic potential system nonlinearly coupled to a harmonic oscillator bath. Although one cannot carry out the quantum calculations of the response functions with full molecular dynamics (MD) simulations for a realistic system which consists of many molecules, it is possible to grasp the essence of the quantum effects on the vibrational spectra by employing a model Hamiltonian that describes an intra- or intermolecular vibrational motion in a condensed phase. The present model fully includes vibrational relaxation, while the stochastic model often used to simulate infrared spectra does not. We have employed the reduced quantum hierarchy equations of motion approach in the Wigner space representation to deal with nonperturbative, non-Markovian, and nonsecular system-bath interactions. Taking the classical limit of the hierarchy equations of motion, we have obtained the classical equations of motion that describe the classical dynamics under the same physical conditions as in the quantum case. By comparing the classical and quantum mechanically calculated linear and multidimensional spectra, we found that the profiles of spectra for a fast modulation case were similar, but different for a slow modulation case. In both the classical and quantum cases, we identified the resonant oscillation peak in the spectra, but the quantum peak shifted to the red compared with the classical one if the potential is anharmonic. The prominent quantum effect is the 1-2 transition peak, which appears only in the quantum mechanically calculated spectra as a result of anharmonicity in the potential or nonlinearity of the system-bath coupling. 
While the contribution of the 1-2 transition is negligible in the fast modulation case, it becomes important in the slow modulation case as long as the amplitude of the frequency fluctuation is small. Thus, we observed a distinct difference between the classical and quantum mechanically calculated multidimensional spectra in the slow modulation case where spectral diffusion plays a role. This fact indicates that one may not reproduce the experimentally obtained multidimensional spectrum for high-frequency vibrational modes based on classical molecular dynamics simulations if the modulation that arises from surrounding molecules is weak and slow. A practical way to overcome the difference between the classical and quantum simulations was discussed.
The ambiguity of simplicity in quantum and classical simulation
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Mahoney, John R.; Crutchfield, James P.
2017-04-01
A system's perceived simplicity depends on whether it is represented classically or quantally. This is not so surprising, as classical and quantum physics are descriptive frameworks built on different assumptions that capture, emphasize, and express different properties and mechanisms. What is surprising is that, as we demonstrate, simplicity is ambiguous: the relative simplicity between two systems can change sign when moving between classical and quantum descriptions. Here, we associate simplicity with small model-memory. We see that the notions of absolute physical simplicity at best form a partial, not a total, order. This suggests that appeals to principles of physical simplicity, via Ockham's Razor or to the "elegance" of competing theories, may be fundamentally subjective. Recent rapid progress in quantum computation and quantum simulation suggests that the ambiguity of simplicity will strongly impact statistical inference and, in particular, model selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William H., E-mail: millerwh@berkeley.edu; Cotton, Stephen J., E-mail: StephenJCotton47@gmail.com
2015-04-07
It is noted that the recently developed symmetrical quasi-classical (SQC) treatment of the Meyer-Miller (MM) model for the simulation of electronically non-adiabatic dynamics provides a good description of detailed balance, even though the dynamics which results from the classical MM Hamiltonian is “Ehrenfest dynamics” (i.e., the force on the nuclei is an instantaneous coherent average over all electronic states). This is seen to be a consequence of the SQC windowing methodology for “processing” the results of the trajectory calculation. For a particularly simple model discussed here, this is shown to be true regardless of the choice of windowing function employed in the SQC model, and for a more realistic full classical molecular dynamics simulation, it is seen to be maintained correctly for very long times.
Lemkul, Justin A; Roux, Benoît; van der Spoel, David; MacKerell, Alexander D
2015-07-15
Explicit treatment of electronic polarization in empirical force fields used for molecular dynamics simulations represents an important advancement in simulation methodology. A straightforward means of treating electronic polarization in these simulations is the inclusion of Drude oscillators, which are auxiliary, charge-carrying particles bonded to the cores of atoms in the system. The additional degrees of freedom make these simulations more computationally expensive relative to simulations using traditional fixed-charge (additive) force fields. Thus, efficient tools are needed for conducting these simulations. Here, we present the implementation of highly scalable algorithms in the GROMACS simulation package that allow for the simulation of polarizable systems using extended Lagrangian dynamics with a dual Nosé-Hoover thermostat as well as simulations using a full self-consistent field treatment of polarization. The performance of systems of varying size is evaluated, showing that the present code parallelizes efficiently and is the fastest implementation of the extended Lagrangian methods currently available for simulations using the Drude polarizable force field. © 2015 Wiley Periodicals, Inc.
Application of simple negative feedback model for avalanche photodetectors investigation
NASA Astrophysics Data System (ADS)
Kushpil, V. V.
2009-10-01
A simple negative feedback model based on Miller's formula is used to investigate the properties of Avalanche Photodetectors (APDs). The proposed method can be applied to study classical APDs as well as new types of devices operating in the Internal Negative Feedback (INF) regime. The method shows good sensitivity to technological APD parameters, making it possible to use it as a tool to analyse various APD parameters. It also allows a better understanding of APD operating conditions. Simulations and experimental data analysis for different types of APDs are presented.
Timko, Jeff; Kuyucak, Serdar
2012-11-28
Polarization is an important component of molecular interactions and is expected to play a particularly significant role in inhomogeneous environments such as pores and interfaces. Here we investigate the effects of polarization in the gramicidin A ion channel by performing quantum mechanics/molecular mechanics molecular dynamics (MD) simulations and comparing the results with those obtained from classical MD simulations with non-polarizable force fields. We consider the dipole moments of backbone carbonyl groups and channel water molecules as well as a number of structural quantities of interest. The ab initio results show that the dipole moments of the carbonyl groups and water molecules are highly sensitive to the hydrogen bonds (H-bonds) they participate in. In the absence of a K(+) ion, water molecules in the channel are quite mobile, making the H-bond network highly dynamic. A central K(+) ion acts as an anchor for the channel waters, stabilizing the H-bond network and thereby increasing their average dipole moments. In contrast, the K(+) ion has little effect on the dipole moments of the neighboring carbonyl groups. The weakness of the ion-peptide interactions helps to explain the near diffusion-rate conductance of K(+) ions through the channel. We also address the sampling issue in relatively short ab initio MD simulations. Results obtained from a continuous 20 ps ab initio MD simulation are compared with those generated by sampling ten windows from a much longer classical MD simulation and running each window for 2 ps with ab initio MD. Both methods yield similar results for a number of quantities of interest, indicating that fluctuations are fast enough to justify the short ab initio MD simulations.
2015-01-01
The reliability of free energy simulations (FES) is limited by two factors: (a) the need for correct sampling and (b) the accuracy of the computational method employed. Classical methods (e.g., force fields) are typically used for FES and present a myriad of challenges, with parametrization being a principal one. On the other hand, parameter-free quantum mechanical (QM) methods tend to be too computationally expensive for adequate sampling. One widely used approach is a combination of methods, where the free energy difference between the two end states is computed by, e.g., molecular mechanics (MM), and the end states are corrected by more accurate methods, such as QM or hybrid QM/MM techniques. Here we report two new approaches that significantly improve the aforementioned scheme, with a focus on how to compute corrections between, e.g., the MM and the more accurate QM calculations. First, a molecular dynamics trajectory that properly samples relevant conformational degrees of freedom is generated. Next, potential energies of each trajectory frame are generated with a QM or QM/MM Hamiltonian. Free energy differences are then calculated based on the QM or QM/MM energies using either a non-Boltzmann Bennett approach (QM-NBB) or non-Boltzmann free energy perturbation (NB-FEP). Both approaches are applied to calculate relative and absolute solvation free energies in explicit and implicit solvent environments. Solvation free energy differences (relative and absolute) between ethane and methanol in explicit solvent are used as the initial test case for QM-NBB. Next, implicit solvent methods are employed in conjunction with both QM-NBB and NB-FEP to compute absolute solvation free energies for 21 compounds. These compounds range from small molecules such as ethane and methanol to fairly large, flexible solutes, such as triacetyl glycerol. Several technical aspects were investigated. 
Ultimately some best practices are suggested for improving methods that seek to connect MM to QM (or QM/MM) levels of theory in FES. PMID:24803863
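The exponential reweighting at the heart of such MM-to-QM corrections is the Zwanzig free energy perturbation identity. The sketch below applies it to a harmonic toy system, where the "low-level" and "high-level" potentials are springs of different stiffness and the exact answer is known analytically; it is a stand-in for the MM and QM Hamiltonians, not the QM-NBB machinery itself:

```python
import numpy as np

# One-step free energy perturbation (Zwanzig) estimator:
#   dF = -kT * ln < exp(-(U_target - U_sampling)/kT) >_sampling
# Toy system: 1D harmonic wells with spring constants k0 ("MM")
# and k1 ("QM"); the exact answer is dF = (kT/2) * ln(k1/k0).

rng = np.random.default_rng(2)
kT = 1.0
k0, k1 = 1.0, 2.0

# Sample the low-level (k0) Boltzmann distribution exactly:
x = rng.normal(0.0, np.sqrt(kT / k0), 200_000)

# Energy difference between the two levels of theory at each frame:
dU = 0.5 * (k1 - k0) * x ** 2

dF_est = -kT * np.log(np.mean(np.exp(-dU / kT)))
dF_exact = 0.5 * kT * np.log(k1 / k0)   # analytic harmonic result
```

In the real workflow the frames come from an MM trajectory and dU is the QM(/MM) minus MM energy of each frame; the estimator degrades when the two potentials overlap poorly, which is the problem the non-Boltzmann Bennett variant is designed to mitigate.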
König, Gerhard; Hudson, Phillip S; Boresch, Stefan; Woodcock, H Lee
2014-04-08
Continuum-kinetic approach to sheath simulations
NASA Astrophysics Data System (ADS)
Cagas, Petr; Hakim, Ammar; Srinivasan, Bhuvana
2016-10-01
Simulations of sheaths are performed using a novel continuum-kinetic model with collisions including ionization/recombination. A discontinuous Galerkin method is used to directly solve the Boltzmann-Poisson system to obtain a particle distribution function. Direct discretization of the distribution function has advantages of being noise-free compared to particle-in-cell methods. The distribution function, which is available at each node of the configuration space, can be readily used to calculate the collision integrals in order to get ionization and recombination operators. Analytical models are used to obtain the cross-sections as a function of energy. Results will be presented incorporating surface physics with a classical sheath in Hall thruster-relevant geometry. This work was sponsored by the Air Force Office of Scientific Research under Grant Number FA9550-15-1-0193.
A Prediction Method of Binding Free Energy of Protein and Ligand
NASA Astrophysics Data System (ADS)
Yang, Kun; Wang, Xicheng
2010-05-01
Predicting the binding free energy is an important problem in biomolecular simulation. Such predictions are of great benefit in understanding protein function, and may be useful for computational prediction of ligand binding strengths, e.g., in discovering pharmaceutical drugs. Free energy perturbation (FEP)/thermodynamic integration (TI) is a classical method for explicitly predicting free energy. However, this method needs substantial time to collect data and is limited to simple systems and small changes of molecular structure. Another method for estimating ligand binding affinities is the linear interaction energy (LIE) method. This method employs averages of interaction potential energy terms from molecular dynamics simulations or other thermal conformational sampling techniques. Incorporating systematic deviations from electrostatic linear response, derived from free energy perturbation studies, into the absolute binding free energy expression significantly enhances the accuracy of the approach. However, it is also time-consuming. In this paper, a new prediction method based on steered molecular dynamics (SMD) with direction optimization is developed to compute binding free energy. Jarzynski's equality is used to derive the PMF or free energy. The results for two numerical examples are presented, showing that the method has good accuracy and efficiency. The novel method can also simulate the whole binding process and give important structural information for the development of new drugs.
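The Jarzynski estimator mentioned above can be sketched directly. The following draws nonequilibrium work values from a Gaussian, for which the exact free energy difference is ⟨W⟩ − var(W)/(2kT), and compares the exponential-average estimate against it; the numbers are hypothetical, not SMD pulling data:

```python
import numpy as np

# Jarzynski equality for extracting a free energy difference from
# nonequilibrium pulling work:  dF = -kT * ln < exp(-W/kT) >.
# For Gaussian-distributed work the exact result is
#   dF = <W> - var(W) / (2 kT).

rng = np.random.default_rng(3)
kT = 0.596                  # kcal/mol at ~300 K
mean_W, std_W = 5.0, 1.0    # hypothetical pulling-work statistics
W = rng.normal(mean_W, std_W, 1_000_000)

dF_jarzynski = -kT * np.log(np.mean(np.exp(-W / kT)))
dF_exact = mean_W - std_W ** 2 / (2.0 * kT)
```

The gap between ⟨W⟩ and dF is the dissipated work; because the exponential average is dominated by rare low-work trajectories, the estimator's variance grows rapidly with dissipation, which is why direction optimization of the pulling path (as proposed above) matters in practice.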
Theoretical analysis of evaporative cooling of classic heat stroke patients
NASA Astrophysics Data System (ADS)
Alzeer, Abdulaziz H.; Wissler, E. H.
2018-05-01
Heat stroke is a serious health concern globally and is associated with high mortality. Newer treatments must be designed to improve outcomes. The aim of this study is to evaluate the effect of variations in ambient temperature and wind speed on the rate of cooling in a simulated heat stroke subject using the dynamic model of Wissler. We assume that a 60-year-old, 70-kg female suffers classic heat stroke after walking fully exposed to the sun for 4 h while the ambient temperature is 40 °C, relative humidity is 20%, and wind speed is 2.5 m/s. Her esophageal and skin temperatures are 41.9 and 40.7 °C at the time of collapse. Cooling is accomplished by misting with lukewarm water while exposed to forced airflow at a temperature of 20 to 40 °C and a velocity of 0.5 or 1 m/s. Skin blood flow is assumed to be either normal, one-half of normal, or twice normal. At a wind speed of 0.5 m/s and normal skin blood flow, decreasing the air temperature from 40 to 20 °C increased cooling and reduced the time required to reach a desired temperature of 38 °C. This relationship was also maintained in reduced blood flow states. Increasing the wind speed to 1 m/s increased cooling and reduced the time to reach the optimal temperature in both normal and reduced skin blood flow states. In conclusion, evaporative cooling provides an effective method for cooling classic heat stroke patients. The maximum heat dissipation from the simulated Wissler model was recorded when the entire body was misted with lukewarm water and forced air was applied at 1 m/s at a temperature of 20 °C.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen "computational" potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Larger values of this function then have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation, while accounting for the posterior uncertainty inherent in the empirical orderings implied jointly by these axioms. The new method is illustrated through a test of the cancellation axioms on a classic survey data set and through the analysis of simulated data.
Hybrid genetic algorithm in the Hopfield network for maximum 2-satisfiability problem
NASA Astrophysics Data System (ADS)
Kasihmuddin, Mohd Shareduwan Mohd; Sathasivam, Saratha; Mansor, Mohd. Asyraf
2017-08-01
Heuristic methods are designed to find optimal solutions more quickly than classical methods, which are often too complex to be practical. In this study, a hybrid approach that combines a Hopfield network and a genetic algorithm for the maximum 2-satisfiability problem (MAX-2SAT) is proposed. The Hopfield neural network is used to minimize logical inconsistency in interpretations of logic clauses or programs. The genetic algorithm (GA) exploits the idea of recombination to reproduce better solutions. Simulations with and without the genetic algorithm were carried out in Microsoft Visual C++ 2013 Express. The performance of both search techniques on MAX-2SAT was evaluated in terms of global minima ratio, ratio of satisfied clauses, and computation time. The results obtained from the computer simulations demonstrate the effectiveness and acceleration of the genetic algorithm for MAX-2SAT in the Hopfield network.
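The GA half of the hybrid can be sketched on its own (without the Hopfield network) as a standard genetic algorithm that maximizes the number of satisfied 2-literal clauses. The population size, mutation rate, and toy instance below are illustrative assumptions:

```python
import random

# Minimal GA for MAX-2SAT (illustrative; the paper couples a GA with a
# Hopfield network, which is not reproduced here). A clause (i, j) means
# literal i OR literal j; positive k reads variable k, negative k its negation.
def satisfied(assign, clauses):
    def lit(l):
        v = assign[abs(l) - 1]
        return v if l > 0 else not v
    return sum(1 for a, b in clauses if lit(a) or lit(b))

def ga_max2sat(clauses, n_vars, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    popn = [[rng.random() < 0.5 for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: -satisfied(a, clauses))  # fitness ranking
        elite = popn[: pop // 2]
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vars)                # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:                        # bit-flip mutation
                k = rng.randrange(n_vars)
                child[k] = not child[k]
            children.append(child)
        popn = elite + children
    best = max(popn, key=lambda a: satisfied(a, clauses))
    return best, satisfied(best, clauses)

clauses = [(1, 2), (-1, 3), (-2, -3), (1, -3)]   # toy satisfiable instance
_, score = ga_max2sat(clauses, n_vars=3)         # finds all 4 clauses satisfied
```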
Babin, Volodymyr; Roland, Christopher; Darden, Thomas A.; Sagui, Celeste
2007-01-01
There is considerable interest in developing methodologies for the accurate evaluation of free energies, especially in the context of biomolecular simulations. Here, we report on a reexamination of the recently developed metadynamics method, which is explicitly designed to probe “rare events” and areas of phase space that are typically difficult to access with a molecular dynamics simulation. Specifically, we show that the accuracy of the free energy landscape calculated with the metadynamics method may be considerably improved when combined with umbrella sampling techniques. As test cases, we have studied the folding free energy landscape of two prototypical peptides: Ace-(Gly)2-Pro-(Gly)3-Nme in vacuo and trialanine solvated by both implicit and explicit water. The method has been implemented in the classical biomolecular code AMBER and is to be distributed in the next scheduled release of the code. © 2006 American Institute of Physics. PMID:17144742
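The basic metadynamics mechanism the authors build on can be sketched in one dimension: Gaussian hills deposited along the trajectory progressively fill the starting well and drive a barrier crossing that would be a rare event for plain dynamics at low temperature. This toy is not the AMBER implementation or the metadynamics-plus-umbrella-sampling scheme of the paper; all parameters are illustrative:

```python
import math, random

# 1-D metadynamics sketch on a double well U(x) = (x^2 - 1)^2.
def metadynamics(steps=20000, dt=0.01, kT=0.3, w=0.05, sigma=0.2,
                 stride=50, seed=0):
    rng = random.Random(seed)
    centers = []                           # hill centers deposited so far

    def dU(x):                             # gradient of the double well
        return 4.0 * x * (x * x - 1.0)

    def dVbias(x):                         # gradient of the summed Gaussians
        return sum(-w * (x - c) / sigma**2
                   * math.exp(-(x - c) ** 2 / (2.0 * sigma**2))
                   for c in centers)

    x, crossed = -1.0, False               # start in the left well
    for step in range(steps):
        force = -dU(x) - dVbias(x)
        x += force * dt + math.sqrt(2.0 * kT * dt) * rng.gauss(0.0, 1.0)
        x = max(-3.0, min(3.0, x))         # keep the toy walker bounded
        if step % stride == 0:
            centers.append(x)              # deposit a new hill
        if x > 0.9:                        # reached the right well
            crossed = True
    return crossed

crossed = metadynamics()   # accumulated bias drives the barrier crossing
```

The negative of the accumulated bias is the usual metadynamics estimate of the free energy surface along the collective variable.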
Ion-ion dynamic structure factor of warm dense mixtures
Gill, N. M.; Heinonen, R. A.; Starrett, C. E.; ...
2015-06-25
In this study, the ion-ion dynamic structure factor of warm dense matter is determined using the recently developed pseudoatom molecular dynamics method [Starrett et al., Phys. Rev. E 91, 013104 (2015)]. The method uses density functional theory to determine ion-ion pair interaction potentials that have no free parameters. These potentials are used in classical molecular dynamics simulations. This constitutes a computationally efficient and realistic model of dense plasmas. Comparison with recently published simulations of the ion-ion dynamic structure factor and sound speed of warm dense aluminum finds good to reasonable agreement. Using this method, we make predictions of the ion-ion dynamical structure factor and sound speed of a warm dense mixture: equimolar carbon-hydrogen. This material is commonly used as an ablator in inertial confinement fusion capsules, and our results are amenable to direct experimental measurement.
Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A
The Boltzmann transport equation is commonly considered the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have become available only recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments in the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method to important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.
Thermokinetic Simulation of Precipitation in NiTi Shape Memory Alloys
NASA Astrophysics Data System (ADS)
Cirstea, C. D.; Karadeniz-Povoden, E.; Kozeschnik, E.; Lungu, M.; Lang, P.; Balagurov, A.; Cirstea, V.
2017-06-01
Considering classical nucleation theory and evolution equations for the growth and composition change of precipitates, we simulate the evolution of the precipitate structure through the classical stages of nucleation, growth, and coarsening using the solid-state transformation software MatCalc. The formation of Ni3Ti, Ni4Ti3, or Ni3Ti2 precipitates is the key to the hardening of these alloys, which depends on the nickel solubility in the bulk alloy. The microstructural evolution of metastable Ni4Ti3 and Ni3Ti2 precipitates in Ni-rich TiNi alloys is simulated by computational thermokinetics, based on thermodynamic and diffusion databases. The simulated precipitate phase fractions are compared with experimental data.
Classical molecular dynamics simulations for non-equilibrium correlated plasmas
NASA Astrophysics Data System (ADS)
Ferri, S.; Calisti, A.; Talin, B.
2017-03-01
A classical molecular dynamics model was recently extended to simulate neutral multi-component plasmas in which various charge states of the same atom coexist with electrons. It is used to investigate plasma effects on the ion charge and on the ionization potential in dense plasmas. Several simulated statistical properties show that the concept of isolated particles is lost in such correlated plasmas. Charge equilibration is discussed for a carbon plasma at solid density, and the charge distribution and ionization potential depression (IPD) of aluminum plasmas are discussed with reference to existing experiments.
Miller, Thomas F; Manolopoulos, David E; Madden, Paul A; Konieczny, Martin; Oberhofer, Harald
2005-02-01
We show that the two phase points considered in the recent simulations of liquid para-hydrogen by Hone and Voth lie in the liquid-vapor coexistence region of a purely classical molecular dynamics simulation. By contrast, their phase point for ortho-deuterium was in the one-phase liquid region for both classical and quantum simulations. These observations are used to account for their report that quantum mechanical effects enhance the diffusion in liquid para-hydrogen and decrease it in ortho-deuterium. (c) 2005 American Institute of Physics.
Designing Free Energy Surfaces That Match Experimental Data with Metadynamics
White, Andrew D.; Dama, James F.; Voth, Gregory A.
2015-04-30
Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
Non-Linear Harmonic flow simulations of a High-Head Francis Turbine test case
NASA Astrophysics Data System (ADS)
Lestriez, R.; Amet, E.; Tartinville, B.; Hirsch, C.
2016-11-01
This work investigates the use of the non-linear harmonic (NLH) method for a high-head Francis turbine, the Francis99 workshop test case. The NLH method relies on a Fourier decomposition of the unsteady flow components into harmonics of the blade passing frequencies (BPFs), which are the fundamentals of the periodic disturbances generated by the adjacent blade rows. The unsteady flow solution is obtained by marching in pseudo-time to a steady-state solution of the transport equations associated with the time-mean flow, the BPFs, and their harmonics. Thanks to this transposition into the frequency domain, meshing only one blade channel is sufficient, as for a steady flow simulation. Notable savings in computing cost and engineering time can therefore be obtained compared with a classical time-marching approach using sliding-grid techniques. The method has been applied to three operating points of the Francis99 workshop high-head Francis turbine. Steady and NLH flow simulations have been carried out for these configurations. The impact of grid size and near-wall refinement is analysed at all operating points for steady simulations and at the best efficiency point (BEP) for NLH simulations. NLH results for a selected grid size are then compared for the three operating points, reproducing the tendencies observed in the experiment.
Machine learning of frustrated classical spin models. I. Principal component analysis
NASA Astrophysics Data System (ADS)
Wang, Ce; Zhai, Hui
2017-10-01
This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If successful, this could be applied, for instance, to analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for the approach. In this work, we feed the computer data generated by classical Monte Carlo simulation of the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of principal component analysis agree very well with our understanding of the different orders in different phases, and that the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by principal component analysis with kernel tricks and by the neural network method.
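The benchmark logic can be sketched on an even simpler toy than the frustrated XY model: feed raw spin configurations to PCA and check that the leading component separates ordered ("low-temperature") from disordered ("high-temperature") samples. The Ising-like data generation below is an illustrative assumption, not the paper's Monte Carlo data:

```python
import numpy as np

# PCA phase-recognition sketch on synthetic Ising-like configurations
# (illustrative; the paper uses Monte Carlo data for the frustrated XY model).
rng = np.random.default_rng(0)
n_sites, n_samples = 64, 200

# "Low-temperature" samples: nearly uniform magnetization with random sign.
low_T = np.array([s * np.where(rng.random(n_sites) < 0.95, 1, -1)
                  for s in rng.choice([-1, 1], n_samples)])
# "High-temperature" samples: independent random spins.
high_T = rng.choice([-1, 1], size=(n_samples, n_sites))

X = np.vstack([low_T, high_T]).astype(float)
X -= X.mean(axis=0)                       # center the data
# Leading principal component from the SVD of the centered data matrix.
_, _, vt = np.linalg.svd(X, full_matrices=False)
proj = X @ vt[0]

# Ordered samples project to large |PC1| (PC1 tracks the magnetization);
# disordered samples project near zero.
sep = np.abs(proj[:n_samples]).mean() / (np.abs(proj[n_samples:]).mean() + 1e-9)
```

Here PC1 recovers the order parameter without being told what it is, which is the benchmark behavior the paper verifies before moving to frustrated models.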
NASA Astrophysics Data System (ADS)
Ih Choi, Woon; Kim, Kwiseon; Narumanchi, Sreekant
2012-09-01
Thermal resistance between layers impedes effective heat dissipation in electronics packaging applications. Thermal conductance for clean and disordered interfaces between silicon (Si) and aluminum (Al) was computed using realistic Si/Al interfaces and classical molecular dynamics with the modified embedded atom method potential. These realistic interfaces, which include atomically clean as well as disordered interfaces, were obtained using density functional theory. At 300 K, the interfacial conductance due to phonon-phonon scattering obtained from the classical molecular dynamics simulations was approximately five times higher than the conductance obtained using analytical elastic diffuse mismatch models. Interfacial disorder reduced the thermal conductance relative to the atomically clean interface because of increased phonon scattering. Also, the interfacial conductance due to electron-phonon scattering at the interface was greater than the conductance due to phonon-phonon scattering, indicating that phonon-phonon scattering is the bottleneck for interfacial transport at semiconductor/metal interfaces. The molecular dynamics predictions of interfacial thermal conductance for a 5-nm disordered Si/Al interface were in line with recent experimental data in the literature.
Temperature scaling method for Markov chains.
Crosby, Lonnie D; Windus, Theresa L
2009-01-22
The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
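A simpler cousin of the TeS idea, plain importance reweighting of one set of samples to another temperature, conveys the same goal of reusing a single simulation at many temperatures. The harmonic oscillator and all parameters below are illustrative assumptions, not the paper's water-cluster systems:

```python
import math, random

# Reweight samples drawn at temperature T1 to estimate an average at T2
# (illustrative; the paper's TeS scales Markov-chain configurations rather
# than applying importance weights, but the goal is the same).
rng = random.Random(0)
T1, T2, n = 1.0, 0.8, 200000

# Direct samples from the Boltzmann distribution of a 1-D harmonic
# oscillator E(x) = x^2 / 2 at temperature T1 (Boltzmann constant = 1).
xs = [rng.gauss(0.0, math.sqrt(T1)) for _ in range(n)]
energies = [0.5 * x * x for x in xs]

# Weight each sample by exp[-(1/T2 - 1/T1) * E] to move it to T2.
dbeta = 1.0 / T2 - 1.0 / T1
ws = [math.exp(-dbeta * e) for e in energies]
e_t2 = sum(w * e for w, e in zip(ws, energies)) / sum(ws)

# Analytic check: <E> = T/2, so the reweighted estimate should be near 0.4.
```

As with TeS, the scheme degrades when the target temperature is far from the simulated one, because few samples then carry significant weight.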
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Kyung; Fan, Liang-Shih; Zhou, Qiang
A new and efficient direct numerical method with second-order convergence accuracy was developed for fully resolved simulations of incompressible viscous flows laden with rigid particles. The method combines the state-of-the-art immersed boundary method (IBM), the multi-direct forcing method, and the lattice Boltzmann method (LBM). First, the multi-direct forcing method is adopted in the improved IBM to better approximate the no-slip/no-penetration (ns/np) condition on the surface of particles. Second, a slight retraction of the Lagrangian grid from the surface towards the interior of particles, by a fraction of the Eulerian grid spacing, helps increase the convergence accuracy of the method. An over-relaxation technique in the multi-direct forcing procedure and the classical fourth-order Runge-Kutta scheme for the coupled fluid-particle interaction were applied. The use of the classical fourth-order Runge-Kutta scheme helps the overall IB-LBM achieve second-order accuracy and provides more accurate predictions of the translational and rotational motion of particles. The pre-existing code with a first-order convergence rate was updated so that the new code can resolve the translational and rotational motion of particles with a second-order convergence rate. The updated code has been validated with several benchmark applications. The efficiency of the IBM, and thus of the IB-LBM, was improved by reducing the number of Lagrangian markers on particles using a new formula for the number of markers on particle surfaces. The immersed boundary-lattice Boltzmann method (IB-LBM) has been shown to predict correctly the angular velocity of a particle. Prior to examining the drag force exerted on a cluster of particles, the updated IB-LBM code, along with the new formula for the number of Lagrangian markers, was further validated by solving several theoretical problems.
Moreover, the unsteadiness of the drag force is examined when a fluid is accelerated from rest by a constant average pressure gradient toward a steady Stokes flow. The simulation results agree well with the theories for the short- and long-time behavior of the drag force. Flows through non-rotational and rotational spheres in simple cubic arrays and random arrays are simulated over the entire range of packing fractions, and at both low and moderate particle Reynolds numbers, to compare the simulated results with literature results and to develop new drag force, lift force, and torque formulas. Random arrays of solid particles in fluids are generated with a Monte Carlo procedure and Zinchenko's method to avoid crystallization of solid particles at high solid volume fractions. A new drag force formula was developed from extensive simulated results to be closely applicable to real processes over the entire range of packing fractions and at both low and moderate particle Reynolds numbers. The simulation results indicate that the drag force is barely affected by rotational Reynolds numbers, and that it is essentially unchanged as the angle of the rotation axis varies.
Quantum Correlations in Nonlocal Boson Sampling.
Shahandeh, Farid; Lund, Austin P; Ralph, Timothy C
2017-09-22
Determination of the quantum nature of correlations between two spatially separated systems plays a crucial role in quantum information science. Of particular interest are the questions of whether and how these correlations enable quantum information protocols to be more powerful. Here, we report on a distributed quantum computation protocol in which the input and output quantum states are considered classically correlated in quantum informatics. Nevertheless, we show that the correlations between the outcomes of measurements on the output state cannot be efficiently simulated using classical algorithms. Crucially, at the same time, local measurement outcomes can be efficiently simulated on classical computers. We show that the only known classicality criterion violated by the input and output states in our protocol is the one used in quantum optics, namely, phase-space nonclassicality. As a result, we argue that the global phase-space nonclassicality inherent in the output state of our protocol represents true quantum correlations.
Fundamental Theory of Crystal Decomposition
1991-05-01
rather than combine them as is often the case in a computation based on the density functional method. In the case of a cluster embedded in a ... classical lattice, special care needs to be taken to ensure that mathematical consistency is achieved between the cluster and the embedding lattice. This has ... localizing potential or KKLP. Simulation of a large crystallite or an infinite lattice containing a point defect represented by a cluster and a
2013-03-22
discrete Wigner function is periodic in momentum space. The periodicity follows from the Fourier transform of the density matrix. The inverse ... resonant-tunneling diode. The Green function method has been one of the alternatives. Another alternative was to utilize the Wigner function. The Wigner ... function approach to the simulation of a resonant-tunneling diode offers many advantages. In the limit of classical physics the Wigner equation
The next GUM and its proposals: a comparison study
NASA Astrophysics Data System (ADS)
Damasceno, J. C.; Couto, P. R. G.
2018-03-01
The Guide to the Expression of Uncertainty in Measurement (GUM) is currently under revision, and new proposals for its implementation have been circulated in the form of a draft document. Two of the main changes are explored in this work using a Brinell hardness model example. Changes in the evaluation of uncertainty for repeated indications and in the construction of coverage intervals are compared with the classic GUM and with the Monte Carlo simulation method.
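The Monte Carlo construction of coverage intervals can be sketched for the Brinell hardness model HB = 0.102 * 2F / (pi * D * (D - sqrt(D^2 - d^2))): propagate input distributions by sampling and read the interval directly from empirical quantiles. The input values and standard uncertainties below are illustrative assumptions, not those of the paper:

```python
import math, random

# Monte Carlo propagation of distributions (GUM Supplement 1 style) for the
# Brinell hardness model. Input values and uncertainties are illustrative.
rng = random.Random(42)

def brinell(F, D, d):
    """HB for force F in N, ball diameter D and indentation d in mm."""
    return 0.102 * 2.0 * F / (math.pi * D * (D - math.sqrt(D * D - d * d)))

def mc_interval(n=100000, p=0.95):
    hbs = []
    for _ in range(n):
        F = rng.gauss(29420.0, 50.0)   # applied force (assumed)
        D = rng.gauss(10.0, 0.005)     # ball diameter (assumed)
        d = rng.gauss(4.0, 0.01)       # indentation diameter (assumed)
        hbs.append(brinell(F, D, d))
    hbs.sort()
    lo = hbs[int(n * (1 - p) / 2)]     # empirical 2.5 % quantile
    hi = hbs[int(n * (1 + p) / 2)]     # empirical 97.5 % quantile
    return lo, hi

lo, hi = mc_interval()
```

Reading the coverage interval from the empirical quantiles, rather than from a Gaussian with an expanded uncertainty, is the key difference from the classic GUM that the comparison above explores.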
NASA Astrophysics Data System (ADS)
Ma, Hua; Qu, Shao-Bo; Xu, Zhuo; Zhang, Jie-Qiu; Wang, Jia-Fu
2009-01-01
By using the coordinate transformation method, we have deduced the material parameter equation for rotating elliptical spherical cloaks and carried out simulations as well. The results indicate that a rotating elliptical spherical cloaking shell, made of metamaterials whose permittivity and permeability are governed by the equation deduced in this paper, can achieve perfect invisibility by excluding electromagnetic fields from the internal region without disturbing any external field.
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco-elastic and poro-elastic materials either on the structure or on cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models that are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient for predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response to residual forces due to structural modifications, resulting in a so-called robust basis that is also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
NASA Astrophysics Data System (ADS)
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as it stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar; ...
2015-06-30
Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm⁻³) and (T = 23.0 K, n = 24.61 nm⁻³), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.
Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe
2013-08-14
Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computational cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performance and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system, involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero-point energy and tunneling effects. It leads to quantum time correlation functions that have the correct short-time behavior and the correct associated spectral frequencies but are slightly too overdamped. This is attributed to the classical propagation approximation rather than to the generation of the quantized initial conditions themselves.
Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Cunsolo, Alessandro; Rossky, Peter J
2015-06-28
We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm(-3)) and (T = 23.0 K, n = 24.61 nm(-3)), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.
Molecular dynamics-driven drug discovery: leaping forward with confidence.
Ganesan, Aravindhan; Coote, Michelle L; Barakat, Khaled
2017-02-01
Given the significant time and financial costs of developing a commercial drug, it remains important to constantly reform the drug discovery pipeline with novel technologies that can narrow the candidates down to the most promising lead compounds for clinical testing. The past decade has witnessed tremendous growth in computational capabilities that enable in silico approaches to expedite drug discovery processes. Molecular dynamics (MD) has become a particularly important tool in drug design and discovery. From classical MD methods to more sophisticated hybrid classical/quantum mechanical (QM) approaches, MD simulations are now able to offer extraordinary insights into ligand-receptor interactions. In this review, we discuss how the applications of MD approaches are significantly transforming current drug discovery and development efforts. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation-maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. In this study, 100 data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effects and random effects) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV.
Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
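The comparison metrics used above are straightforward to compute. A minimal Python sketch of the usual definitions is given below (the exact conventions, e.g. whether values are expressed as percentages, may differ from those used in the study):

```python
import numpy as np

def ree(est, true):
    """Relative estimation error (%) of each replicate estimate."""
    return 100.0 * (np.asarray(est, float) - true) / true

def rrmse(est, true):
    """Relative root mean squared error (%) across replicate estimates."""
    r = (np.asarray(est, float) - true) / true
    return 100.0 * np.sqrt(np.mean(r ** 2))
```

REE captures the bias of each individual estimate, while rRMSE aggregates both bias and imprecision over all simulated replicates.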
NASA Astrophysics Data System (ADS)
Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.
2018-07-01
Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.
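One of the simpler property evaluations mentioned, a lattice constant, can be sketched as an energy-versus-lattice-parameter scan followed by a parabolic fit near the minimum. In the hedged Python sketch below, a toy Lennard-Jones pair energy stands in for a real interatomic-potential evaluation; the actual framework drives full molecular dynamics codes and its calculation scripts are considerably more elaborate.

```python
import numpy as np

def lattice_constant(energy, a_lo, a_hi, n=41):
    """Estimate the equilibrium lattice parameter by scanning energy(a) on a
    grid and fitting a parabola through the three points bracketing the
    minimum; the parabola's vertex is returned as a0."""
    a = np.linspace(a_lo, a_hi, n)
    e = np.array([energy(x) for x in a])
    i = int(np.argmin(e))
    c2, c1, _ = np.polyfit(a[i - 1:i + 2], e[i - 1:i + 2], 2)
    return -c1 / (2.0 * c2)

def lj(a):
    # toy Lennard-Jones pair energy standing in for a real potential call
    return 4.0 * ((1.0 / a) ** 12 - (1.0 / a) ** 6)

a0 = lattice_constant(lj, 1.0, 1.5)   # exact minimum at 2**(1/6) ≈ 1.1225
```

Sensitivity to the scan range, grid spacing, and fitting window is exactly the kind of method/parameter variability the framework is designed to expose.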
Multiscale methods for computational RNA enzymology
Panteva, Maria T.; Dissanayake, Thakshila; Chen, Haoyuan; Radak, Brian K.; Kuechler, Erich R.; Giambaşu, George M.; Lee, Tai-Sung; York, Darrin M.
2016-01-01
RNA catalysis is of fundamental importance to biology and yet remains ill-understood due to its complex nature. The multi-dimensional “problem space” of RNA catalysis includes both local and global conformational rearrangements, changes in the ion atmosphere around nucleic acids and metal ion binding, dependence on potentially correlated protonation states of key residues and bond breaking/forming in the chemical steps of the reaction. The goal of this article is to summarize and apply multiscale modeling methods in an effort to target the different parts of the RNA catalysis problem space while also addressing the limitations and pitfalls of these methods. Classical molecular dynamics (MD) simulations, reference interaction site model (RISM) calculations, constant pH molecular dynamics (CpHMD) simulations, Hamiltonian replica exchange molecular dynamics (HREMD) and quantum mechanical/molecular mechanical (QM/MM) simulations will be discussed in the context of the study of RNA backbone cleavage transesterification. This reaction is catalyzed by both RNA and protein enzymes, and here we examine the different mechanistic strategies taken by the hepatitis delta virus ribozyme (HDVr) and RNase A. PMID:25726472
Characterization of the geometry and topology of DNA pictured as a discrete collection of atoms
Olson, Wilma K.
2014-01-01
The structural and physical properties of DNA are closely related to its geometry and topology. The classical mathematical treatment of DNA geometry and topology in terms of ideal smooth space curves was not designed to characterize the spatial arrangements of atoms found in high-resolution and simulated double-helical structures. We present here new and rigorous numerical methods for the rapid and accurate assessment of the geometry and topology of double-helical DNA structures in terms of the constituent atoms. These methods are well designed for large DNA datasets obtained in detailed numerical simulations or determined experimentally at high-resolution. We illustrate the usefulness of our methodology by applying it to the analysis of three canonical double-helical DNA chains, a 65-bp minicircle obtained in recent molecular dynamics simulations, and a crystallographic array of protein-bound DNA duplexes. Although we focus on fully base-paired DNA structures, our methods can be extended to treat the geometry and topology of melted DNA structures as well as to characterize the folding of arbitrary molecules such as RNA and cyclic peptides. PMID:24791158
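On the topological side of such analyses, one standard quantity is the writhe of a closed polygonal curve, obtained from the Gauss double integral over pairs of segments. The Python sketch below uses a simple midpoint-rule approximation; it is an illustrative stand-in, not the rigorous exact polygonal formulas the paper develops.

```python
import numpy as np

def writhe(points):
    """Midpoint-rule estimate of the writhe of a closed polygonal curve,
    from the discretized Gauss double integral over segment pairs."""
    p = np.asarray(points, float)
    d = np.roll(p, -1, axis=0) - p     # segment vectors (closing the curve)
    c = p + 0.5 * d                    # segment midpoints
    n = len(p)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = c[i] - c[j]
            total += np.dot(np.cross(d[i], d[j]), r) / np.linalg.norm(r) ** 3
    return total / (4.0 * np.pi)
```

A planar curve has zero writhe (every cross product is perpendicular to the plane containing the connecting vector), which makes a convenient sanity check.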
Assessment of a hybrid finite element and finite volume code for turbulent incompressible flows
Xia, Yidong; Wang, Chuanjin; Luo, Hong; ...
2015-12-15
Hydra-TH is a hybrid finite-element/finite-volume incompressible/low-Mach flow simulation code based on the Hydra multiphysics toolkit being developed and used for thermal-hydraulics applications. In the present work, a suite of verification and validation (V&V) test problems for Hydra-TH was defined to meet the design requirements of the Consortium for Advanced Simulation of Light Water Reactors (CASL). The intent for this test problem suite is to provide baseline comparison data that demonstrates the performance of the Hydra-TH solution methods. The simulation problems vary in complexity from laminar to turbulent flows. A set of RANS and LES turbulence models were used in the simulation of four classical test problems. Numerical results obtained by Hydra-TH agreed well with either the available analytical solution or experimental data, indicating the verified and validated implementation of these turbulence models in Hydra-TH. Where possible, we have attempted some form of solution verification to identify sensitivities in the solution methods, and to suggest best practices when using the Hydra-TH code.
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classical" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
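The bound described can be reproduced in a few lines: if the position uncertainty is a fraction f of the mean free path λ and the momentum uncertainty is the same fraction f of the momentum p, then Δx·Δp ≥ ħ/2 requires f ≥ sqrt(ħ / (2 λ p)). The Python sketch below evaluates this with a nonrelativistic momentum; the mean free path used in the example is an assumed illustrative value, not a number taken from the paper.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E  = 9.1093837015e-31  # electron mass, kg
EV   = 1.602176634e-19   # electron volt, J

def min_relative_uncertainty(energy_ev, mfp_m):
    """Smallest common relative uncertainty f such that dx = f*mfp and
    dp = f*p satisfy the Heisenberg relation dx*dp >= hbar/2, using the
    nonrelativistic momentum p = sqrt(2*m*E)."""
    p = math.sqrt(2.0 * M_E * energy_ev * EV)
    return math.sqrt(HBAR / (2.0 * mfp_m * p))
```

With a mean free path of order a nanometer, as is typical for low-energy electrons in condensed water, the bound lands in the few-percent range at 1 keV and grows rapidly as the energy decreases.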
Paul, Amit K; Hase, William L
2016-01-28
A zero-point energy (ZPE) constraint model is proposed for classical trajectory simulations of unimolecular decomposition and applied to CH4* → H + CH3 decomposition. With this model trajectories are not allowed to dissociate unless they have ZPE in the CH3 product. If not, they are returned to the CH4* region of phase space and, if necessary, given additional opportunities to dissociate with ZPE. The lifetime for dissociation of an individual trajectory is the time it takes to dissociate with ZPE in CH3, including multiple possible returns to CH4*. With this ZPE constraint the dissociation of CH4* is exponential in time as expected for intrinsic RRKM dynamics and the resulting rate constant is in good agreement with the harmonic quantum value of RRKM theory. In contrast, a model that discards trajectories without ZPE in the reaction products gives a CH4* → H + CH3 rate constant that agrees with the classical and not quantum RRKM value. The rate constant for the purely classical simulation indicates that anharmonicity may be important and the rate constant from the ZPE constrained classical trajectory simulation may not represent the complete anharmonicity of the RRKM quantum dynamics. The ZPE constraint model proposed here is compared with previous models for restricting ZPE flow in intramolecular dynamics, and connecting product and reactant/product quantum energy levels in chemical dynamics simulations.
Quantum-classical correspondence in the vicinity of periodic orbits
NASA Astrophysics Data System (ADS)
Kumari, Meenu; Ghose, Shohini
2018-05-01
Quantum-classical correspondence in chaotic systems is a long-standing problem. We describe a method to quantify Bohr's correspondence principle and calculate the size of quantum numbers for which we can expect to observe quantum-classical correspondence near periodic orbits of Floquet systems. Our method shows how the stability of classical periodic orbits affects quantum dynamics. We demonstrate our method by analyzing quantum-classical correspondence in the quantum kicked top (QKT), which exhibits both regular and chaotic behavior. We use our correspondence conditions to identify signatures of classical bifurcations even in a deep quantum regime. Our method can be used to explain the breakdown of quantum-classical correspondence in chaotic systems.
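The classical limit of the QKT referred to above is a map on the unit sphere: one Floquet period consists of a linear rotation followed by a nonlinear "torsion" whose rotation angle is proportional to the z component of the (normalized) spin. A minimal Python sketch of one common convention is given below; sign and ordering conventions vary between papers, so treat the exact form as an assumption.

```python
import numpy as np

def kicked_top_step(v, k, p=np.pi / 2):
    """One Floquet period of the classical kicked top: a rotation by angle p
    about the y axis, then a nonlinear kick rotation about z by k*z."""
    x, y, z = v
    # linear rotation about the y axis
    x, z = x * np.cos(p) + z * np.sin(p), -x * np.sin(p) + z * np.cos(p)
    # torsion about the z axis, with angle proportional to z
    a = k * z
    x, y = x * np.cos(a) - y * np.sin(a), x * np.sin(a) + y * np.cos(a)
    return np.array([x, y, z])
```

Since both sub-steps are rotations, the map preserves the sphere exactly; varying the kick strength k moves the system between regular and chaotic regimes and generates the periodic orbits and bifurcations analyzed in the paper.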
On the accuracy of the LSC-IVR approach for excitation energy transfer in molecular aggregates
NASA Astrophysics Data System (ADS)
Teh, Hung-Hsuan; Cheng, Yuan-Chung
2017-04-01
We investigate the applicability of the linearized semiclassical initial value representation (LSC-IVR) method to excitation energy transfer (EET) problems in molecular aggregates by simulating the EET dynamics of a dimer model in a wide range of parameter regimes and comparing the results to those obtained from a numerically exact method. It is found that the LSC-IVR approach yields accurate population relaxation rates and decoherence rates in a broad parameter regime. However, the classical approximation imposed by the LSC-IVR method does not satisfy the detailed balance condition, generally leading to incorrect equilibrium populations. Based on this observation, we propose a post-processing algorithm to solve the long-time equilibrium problem and demonstrate that this long-time correction successfully removes the deviations from the exact results in all of the regimes studied in this work. Finally, we apply the LSC-IVR method to simulate EET dynamics in the photosynthetic Fenna-Matthews-Olson complex system, demonstrating that the LSC-IVR method with long-time correction provides an excellent description of coherent EET dynamics in this typical photosynthetic pigment-protein complex.
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
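The moment-based idea can be illustrated for simple linear regression: additive error in the covariate inflates its empirical variance, attenuating the naive slope toward zero, and subtracting the (estimated) average error variance restores consistency. Below is a hedged Python sketch; the estimator proposed in the paper is more general, handling heteroscedastic errors of unknown distribution and nonlinear models via subsampling extrapolation.

```python
import numpy as np

def mom_corrected_slope(w, y, err_var):
    """Method-of-moments bias correction for simple linear regression when
    the covariate w = x + u is observed with additive error u of known
    (possibly per-observation) variance: the average error variance is
    subtracted from the empirical variance of the error-prone covariate."""
    w, y = np.asarray(w, float), np.asarray(y, float)
    cov_wy = np.cov(w, y, ddof=1)[0, 1]
    var_w = np.var(w, ddof=1)
    return cov_wy / (var_w - np.mean(err_var))
```

On synthetic data with true slope 2 and error variance 0.25 on a unit-variance covariate, the naive slope shrinks toward 2/1.25 = 1.6 while the corrected slope recovers the truth.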
NASA Astrophysics Data System (ADS)
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-01
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
Ab Initio Calculations of Transport in Titanium and Aluminum Mixtures
NASA Astrophysics Data System (ADS)
Walker, Nicholas; Novak, Brian; Tam, Ka Ming; Moldovan, Dorel; Jarrell, Mark
In classical molecular dynamics simulations, the self-diffusion coefficient and shear viscosity of titanium near the melting point fall within the ranges provided by experimental data. However, such experimental data are difficult to collect and rather scattered, making them of limited value for validating these calculations. By using ab initio molecular dynamics simulations within the density functional theory framework, the classical molecular dynamics data can be validated. The dynamical data from the ab initio molecular dynamics can also be used to parametrize new potentials for use in classical molecular dynamics, allowing for more accurate classical simulations of the liquid phase. For metallic materials such as titanium and aluminum alloys, these calculations are very valuable due to an increasing demand for knowledge of the thermophysical properties that drive the development of new materials. For example, alongside the surface tension, viscosity is an important input for modeling the additive manufacturing process at the continuum level. We are developing calculations of the viscosity and self-diffusion coefficient for aluminum, titanium, and titanium-aluminum alloys with ab initio molecular dynamics. Supported by the National Science Foundation through cooperative agreement OIA-1541079 and the Louisiana Board of Regents.
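One of the transport properties mentioned, the self-diffusion coefficient, is routinely extracted from either ab initio or classical trajectories through the Einstein relation MSD(t) ≈ 6 D t. The Python sketch below fits that relation to a set of 3D trajectories; it ignores refinements such as multiple time origins and block averaging that a production analysis would use.

```python
import numpy as np

def self_diffusion(positions, dt):
    """Estimate D from the Einstein relation MSD(t) = 6*D*t.
    positions: array of shape (n_frames, n_atoms, 3); dt: frame spacing."""
    disp = positions - positions[0]                    # displacement from t=0
    msd = np.mean(np.sum(disp ** 2, axis=2), axis=1)   # average over atoms
    t = dt * np.arange(len(msd))
    slope = np.sum(t * msd) / np.sum(t * t)            # LSQ fit through origin
    return slope / 6.0
```

The Green-Kubo route (integrating the velocity or stress autocorrelation function) is the analogous estimator for the viscosity; both demand long, well-equilibrated trajectories, which is where classical potentials validated against ab initio data pay off.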
Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton
2017-03-28
Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, both of which we employ in two radically distinct regimes: in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
NASA Astrophysics Data System (ADS)
Gruska, Jozef
2012-06-01
One of the most basic tasks in quantum information processing, communication and security (QIPCC) research, theoretically deep and practically important, is to find bounds on how important inherently quantum resources really are for speeding up computations. This area of research is producing a variety of results which imply, often in a very unexpected and counter-intuitive way, that: (a) surprisingly large classes of quantum circuits and algorithms can be efficiently simulated on classical computers; (b) the border line between quantum processes that can and cannot be efficiently simulated on classical computers is often surprisingly thin; (c) the addition of a seemingly very simple resource or tool often enormously increases the power of available quantum tools. These discoveries have also shed new light on our understanding of quantum phenomena and quantum physics, and on the potential of their inherently quantum and often mysterious-looking phenomena. The paper motivates and surveys research and its outcomes in the area of de-quantisation, and in particular presents various approaches, and their outcomes, concerning efficient classical simulations of various families of quantum circuits and algorithms. To motivate this area of research, some outcomes in the area of de-randomization of classical randomized computations are also presented.
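The cost at the heart of the simulability question is easy to exhibit: a direct classical simulation stores 2^n amplitudes for an n-qubit register and updates all of them per gate. The Python sketch below implements that brute-force statevector approach for a Bell-pair circuit (convention assumed: qubit 0 is the first tensor axis); efficient classical simulations of restricted circuit families, such as Clifford circuits, avoid this exponential representation entirely.

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)   # Hadamard gate

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit statevector;
    time and memory scale as 2**n, the core cost of direct simulation."""
    psi = state.reshape((2,) * n)
    psi = np.moveaxis(np.tensordot(gate, np.moveaxis(psi, target, 0), axes=1), 0, target)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """Flip the target qubit on the control = 1 subspace."""
    psi = state.reshape((2,) * n).copy()
    idx = [slice(None)] * n
    idx[control] = 1                       # select the control = 1 block
    t = target - (1 if target > control else 0)
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=t)
    return psi.reshape(-1)

# Bell pair: |00> -> (|00> + |11>)/sqrt(2)
n = 2
state = np.zeros(2 ** n)
state[0] = 1.0
state = apply_1q(state, H, 0, n)
state = apply_cnot(state, 0, 1, n)
```

Doubling n doubles nothing about the circuit description but squares the statevector size, which is exactly why identifying efficiently simulable subclasses is so valuable.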
Estimating Tree Height-Diameter Models with the Bayesian Method
Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models take the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the “best” model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2. PMID:24711733
Estimating tree height-diameter models with the Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo; Xiang, Congwei
2014-01-01
Six candidate height-diameter models were used to analyze the height-diameter relationships. The common methods for estimating height-diameter models take the classical (frequentist) approach based on the frequency interpretation of probability, for example, the nonlinear least squares method (NLS) and the maximum likelihood method (ML). The Bayesian method has a distinct advantage over the classical method in that the parameters to be estimated are regarded as random variables. In this study, the classical and Bayesian methods were each used to estimate the six height-diameter models. Both the classical and Bayesian methods showed that the Weibull model was the "best" model using data1. In addition, based on the Weibull model, data2 was used to compare the Bayesian method with informative priors against the Bayesian method with uninformative priors and the classical method. The results showed that the improvement in prediction accuracy with the Bayesian method led to narrower confidence bands of predicted values in comparison with the classical method, and the credible bands of parameters with informative priors were also narrower than those with uninformative priors and the classical method. The estimated posterior distributions of the parameters can be set as new priors when estimating the parameters using data2.
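The Bayesian treatment described above, parameters as random variables whose distributions are updated by data, can be sketched with a simple random-walk Metropolis sampler. The power-law height-diameter model H = 1.3 + a·D^b, the fixed error standard deviation, and the flat priors below are illustrative assumptions; the study compares six candidate models and uses informative priors fitted with dedicated software.

```python
import numpy as np

def metropolis_hd(d, h, theta0, n_iter=20000, step=0.02, rng=None):
    """Random-walk Metropolis sampler for the power height-diameter model
    H = 1.3 + a * D**b with Gaussian errors (sigma fixed at 1 for brevity)
    and flat priors on a, b > 0; returns the second half of the chain."""
    rng = np.random.default_rng(rng)
    d, h = np.asarray(d, float), np.asarray(h, float)

    def logpost(a, b):
        if a <= 0 or b <= 0:
            return -np.inf                 # flat prior on the positive quadrant
        resid = h - (1.3 + a * d ** b)
        return -0.5 * np.sum(resid ** 2)

    theta, lp = np.asarray(theta0, float), logpost(*theta0)
    draws = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, 2)
        lp_new = logpost(*prop)
        if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_new
        draws.append(theta.copy())
    return np.array(draws[n_iter // 2:])         # discard burn-in
```

Starting the chain near a crude classical (e.g. NLS) fit speeds convergence; the retained draws approximate the posterior, whose quantiles give the credible bands discussed in the abstract.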
Classical methods and modern analysis for studying fungal diversity
John Paul Schmit
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Classical Methods and Modern Analysis for Studying Fungal Diversity
J. P. Schmit; D. J. Lodge
2005-01-01
In this chapter, we examine the use of classical methods to study fungal diversity. Classical methods rely on the direct observation of fungi, rather than sampling fungal DNA. We summarize a wide variety of classical methods, including direct sampling of fungal fruiting bodies, incubation of substrata in moist chambers, culturing of endophytes, and particle plating. We...
Lattice Boltzmann Simulation of Blood Flow in Blood Vessels with the Rolling Massage
NASA Astrophysics Data System (ADS)
Yi, Hou-Hui; Xu, Shi-Xiong; Qian, Yue-Hong; Fang, Hai-Ping
2005-12-01
The rolling massage manipulation is a classic Chinese massage, which is expected to improve the circulation by pushing, pulling, and kneading the muscle. A model for the rolling massage manipulation is proposed and the lattice Boltzmann method is applied to study the blood flow in the blood vessels. The simulation results show that the blood flux is considerably modified by the rolling massage, and the magnitude of the effect depends on the rolling frequency, the rolling depth, and the diameter of the vessel. The smaller the diameter of the blood vessel, the larger the enhancement of the blood flux by the rolling massage. The model, together with the simulation results, is expected to be helpful for understanding the mechanism and further development of rolling massage techniques.
Remedial options for creosote-contaminated sites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, W.J.; Delshad, M.; Oolman, T.
2000-03-31
Free-phase DNAPL recovery operations are becoming increasingly prevalent at creosote-contaminated aquifer sites. This paper illustrates the potential of both classical and innovative recovery methods. The UTCHEM multiphase flow and transport numerical simulator was used to predict the migration of creosote DNAPL during a hypothetical spill event, during long-term redistribution after the spill, and for a variety of subsequent free-phase DNAPL recovery operations. The physical parameters used for the DNAPL and the aquifer in the model are estimates for a specific creosote DNAPL site. Other simulations were also conducted using physical parameters that are typical of a trichloroethene (TCE) DNAPL. Dramatic differences in DNAPL migration were observed between these simulations.
Research on transient thermal process of a friction brake during repetitive cycles of operation
NASA Astrophysics Data System (ADS)
Slavchev, Yanko; Dimitrov, Lubomir; Dimitrov, Yavor
2017-12-01
Simplified models are used in classical engineering analyses of friction brake heating during repetitive cycles of operation, mainly to determine the maximum and minimum brake temperatures. The objective of the present work is to broaden and complement these analyses with a model that is based on the classical scheme of Newton's law of cooling and improves on previous studies by adding a disturbance function for the corresponding braking process. A general case of braking in a non-periodic repetitive mode is considered, for which a piecewise function is defined to apply pulse thermal loads to the system. Cases with rectangular and triangular waveforms are presented. A periodic repetitive braking process is also studied, using a periodic rectangular waveform, until a steady thermal state is achieved. Different numerical methods, such as Euler's method, the classical fourth-order Runge-Kutta method (RK4) and the Runge-Kutta-Fehlberg 4-5 method (RKF45), are used to solve the nonlinear differential equation of the model. The constructed model makes it possible, during pre-engineering calculations, to determine effectively the time needed to reach the steady thermal state of the brake, to simulate actual braking modes in vehicles and material handling machines, and to account for the thermal impact when performing fatigue calculations.
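The core of such a model can be sketched as Newton's-law cooling driven by a rectangular pulse heat input, integrated with classical RK4. All numerical values here (cooling coefficient, cycle times, heat rate) are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Brake temperature model:  dT/dt = -k*(T - T_env) + q(t)
# with q(t) a periodic rectangular pulse representing repeated braking.
k, T_env = 0.05, 20.0                       # cooling coefficient (1/s), ambient (C)
t_cycle, t_brake, q_on = 60.0, 10.0, 8.0    # cycle / braking duration (s), heat rate (C/s)

def q(t):
    """Rectangular pulse disturbance: heating only while the brake is applied."""
    return q_on if (t % t_cycle) < t_brake else 0.0

def f(t, T):
    return -k * (T - T_env) + q(t)

def rk4(T0, dt, n):
    """Classical fourth-order Runge-Kutta integration of dT/dt = f(t, T)."""
    t, T, out = 0.0, T0, [T0]
    for _ in range(n):
        k1 = f(t, T)
        k2 = f(t + dt / 2, T + dt / 2 * k1)
        k3 = f(t + dt / 2, T + dt / 2 * k2)
        k4 = f(t + dt, T + dt * k3)
        T += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        out.append(T)
    return np.array(out)

temps = rk4(T0=20.0, dt=0.1, n=60000)       # 100 repeated braking cycles
```

After the transient, the temperature settles into a periodic steady state; the time to reach it is exactly the quantity the abstract says the model is meant to deliver in pre-engineering calculations.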
Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions
Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán
2013-01-01
Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954
Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.
Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal
2018-01-01
Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged, and they are nowadays applied in many fields such as materials science and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience, providing an overview of two widely used enhanced molecular dynamics methods for studying protein structure and dynamics through a description of their characteristics and limits, and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.
Discrete-time modelling of musical instruments
NASA Astrophysics Data System (ADS)
Välimäki, Vesa; Pakarinen, Jyri; Erkut, Cumhur; Karjalainen, Matti
2006-01-01
This article describes physical modelling techniques that can be used for simulating musical instruments. The methods are closely related to digital signal processing. They discretize the system with respect to time, because the aim is to run the simulation using a computer. The physics-based modelling methods can be classified as mass-spring, modal, wave digital, finite difference, digital waveguide and source-filter models. We present the basic theory and a discussion on possible extensions for each modelling technique. For some methods, a simple model example is chosen from the existing literature demonstrating a typical use of the method. For instance, in the case of the digital waveguide modelling technique a vibrating string model is discussed, and in the case of the wave digital filter technique we present a classical piano hammer model. We tackle some nonlinear and time-varying models and include new results on the digital waveguide modelling of a nonlinear string. Current trends and future directions in physical modelling of musical instruments are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and its ease of implementation in standard software without Hessians or constraint solvers. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
Statistically Modeling I-V Characteristics of CNT-FET with LASSO
NASA Astrophysics Data System (ADS)
Ma, Dongsheng; Ye, Zuochang; Wang, Yan
2017-08-01
With the advent of the internet of things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, compact models for transistors are built on the basis of physics, but physical models are expensive to develop and need a long time to adjust for non-ideal effects. When the application of a novel device is not yet certain or the manufacturing process is not mature, deriving generalized, accurate physical models is very strenuous, whereas statistical modeling is a potential alternative because it is data-oriented and fast to implement. In this paper, a classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean-square prediction error of the trained model against experimental sample data, together with the simulation results, shows that the model is acceptable for digital circuit static simulation. This modeling methodology can be extended to general devices.
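The LASSO fitting step can be sketched with scikit-learn. The synthetic polynomial I-V surface below is a hypothetical stand-in for the measured CNT-FET samples used in the paper:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Toy polynomial surrogate for Id(Vgs, Vds); the real study fits
# experimental sample data rather than this assumed surface.
rng = np.random.default_rng(1)
V = rng.uniform(0.0, 1.0, size=(500, 2))    # columns: Vgs, Vds (V)
Id = 1e-6 * (V[:, 0]**2 * V[:, 1] - 0.3 * V[:, 0] * V[:, 1]**2)
Id = Id + rng.normal(0.0, 1e-9, Id.size)    # measurement noise (A)

# LASSO on polynomial features: the L1 penalty zeroes out terms that
# do not help, leaving a sparse, fast-to-evaluate compact model.
X = PolynomialFeatures(degree=4, include_bias=False).fit_transform(V)
model = Lasso(alpha=1e-4, max_iter=100_000).fit(X, Id * 1e6)   # work in uA

pred = model.predict(X) * 1e-6
nrmse = np.sqrt(np.mean((pred - Id) ** 2)) / np.ptp(Id)        # normalized RMSE
```

The sparse coefficient vector is what makes the statistical model cheap enough for circuit-level static simulation.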
A cut-cell immersed boundary technique for fire dynamics simulation
NASA Astrophysics Data System (ADS)
Vanella, Marcos; McDermott, Randall; Forney, Glenn
2015-11-01
Fire simulation around complex geometry is gaining increasing attention in performance-based design of fire protection systems, fire-structure interaction and pollutant transport in complex terrains, among others. This presentation will focus on our present effort in improving the capability of FDS (Fire Dynamics Simulator, developed at the Fire Research Division, NIST; https://github.com/firemodels/fds-smv) to represent fire scenarios around complex bodies. Velocities in the vicinity of the bodies are reconstructed using a classical immersed boundary scheme (Fadlun and co-workers, J. Comput. Phys., 161:35-60, 2000). Also, a conservative treatment of scalar transport equations (i.e. for chemical species) will be presented. In our method, discrete conservation and no penetration of species across solid boundaries are enforced using a cut-cell finite volume scheme. The small-cell problem inherent to the method is tackled using explicit-implicit domain decomposition for scalars within the FDS time integration scheme. Some details on the derivation, implementation and numerical tests of this numerical scheme will be discussed.
NASA Astrophysics Data System (ADS)
Fan, Xiaofeng; Wang, Jiangfeng
2016-06-01
The atomization of liquid fuel is an intricate dynamic process of transition from a continuous phase to a discrete phase. Fuel spray in supersonic flow is modeled with an Eulerian-Lagrangian computational fluid dynamics methodology. The method combines two distinct techniques into an integrated numerical simulation of the atomization processes. A traditional finite volume method on a stationary (Eulerian) Cartesian grid is used to resolve the flow field; multi-component Navier-Stokes equations are adopted in the present work, accounting for the mass exchange and heat transfer associated with the vaporization process. A marker-based moving (Lagrangian) grid is utilized to describe the behavior of atomized liquid sprays injected into a gaseous environment, and a discrete droplet model is adopted. To verify the current approach, the proposed method is applied to simulate liquid atomization in supersonic cross flow. Three classic breakup models, the TAB model, the wave model and the K-H/R-T hybrid model, are discussed. The numerical results are compared quantitatively from multiple perspectives, including spray penetration height and droplet size distribution. In addition, the complex flow-field structures induced by the presence of the liquid spray are illustrated and discussed. The marker-based Eulerian-Lagrangian method is validated as effective and reliable.
Vermorel, Romain; Oulebsir, Fouad; Galliero, Guillaume
2017-09-14
The computation of diffusion coefficients in molecular systems ranks among the most useful applications of equilibrium molecular dynamics simulations. However, when dealing with the problem of fluid diffusion through vanishingly thin interfaces, classical techniques are not applicable, because the volume of space in which molecules diffuse is ill-defined. In such conditions, non-equilibrium techniques allow for the computation of transport coefficients per unit interface width, but their weak point lies in their inability to isolate the contribution of the different physical mechanisms prone to impact the flux of permeating molecules. In this work, we propose a simple and accurate method to compute the diffusional transport coefficient of a pure fluid through a planar interface from equilibrium molecular dynamics simulations, in the form of a diffusion coefficient per unit interface width. In order to demonstrate its validity and accuracy, we apply our method to the case study of a dilute gas diffusing through a smoothly repulsive single-layer porous solid. We believe this complementary technique can benefit the interpretation of the results obtained on single-layer membranes by means of complex non-equilibrium methods.
Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Brunett, Acacia
2015-04-26
The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and a capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, little work has investigated the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals for the failure probability obtained from a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method used to explore the uncertainty space and offers a quantitative way to gauge the increase in statistical accuracy gained by performing additional simulations.
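One standard frequentist route for this kind of analysis can be sketched as an exact (Clopper-Pearson) binomial confidence interval on the failure probability estimated from load/capacity samples. The normal distributions below are illustrative placeholders, not RISMC plant-simulation outputs, and this is not necessarily either of the two methods the report proposes:

```python
import numpy as np
from scipy import stats

# Monte Carlo load/capacity comparison: a "failure" is a sample
# where load >= capacity.
rng = np.random.default_rng(42)
N = 10_000
load = rng.normal(100.0, 10.0, N)        # system load samples (assumed)
capacity = rng.normal(140.0, 10.0, N)    # system capacity samples (assumed)
k = int(np.sum(load >= capacity))        # observed failure count
p_hat = k / N                            # point estimate of failure probability

# Exact 95% Clopper-Pearson interval via the beta distribution
alpha = 0.05
lo = stats.beta.ppf(alpha / 2, k, N - k + 1) if k > 0 else 0.0
hi = stats.beta.ppf(1 - alpha / 2, k + 1, N - k)
```

Rerunning with larger `N` shrinks `[lo, hi]`, which is precisely the quantitative gauge of accuracy-per-additional-simulation the abstract describes.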
A thermodynamically consistent discontinuous Galerkin formulation for interface separation
Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...
2015-07-31
Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size which is unaffected by the presence of stiff massless interfaces. The proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method depend on the cohesive penalty stiffnesses. Because of this property, the proposed approach yields more accurate predictions of crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.
Energy diffusion controlled reaction rate of reacting particle driven by broad-band noise
NASA Astrophysics Data System (ADS)
Deng, M. L.; Zhu, W. Q.
2007-10-01
The energy diffusion controlled reaction rate of a reacting particle with linear weak damping and broad-band noise excitation is studied by using the stochastic averaging method. First, the stochastic averaging method for strongly nonlinear oscillators under broad-band noise excitation using generalized harmonic functions is briefly introduced. Then, the reaction rate of the classical Kramers reaction model with linear weak damping and broad-band noise excitation is investigated by using the stochastic averaging method. The averaged Itô stochastic differential equation describing the energy diffusion and the Pontryagin equation governing the mean first-passage time (MFPT) are established. The energy diffusion controlled reaction rate is obtained as the inverse of the MFPT by solving the Pontryagin equation. The results for two special cases of broad-band noise, i.e. harmonic noise and exponentially correlated noise, are discussed in detail. It is demonstrated that the general expression of the reaction rate derived by the authors can be reduced to the classical ones via the linear approximation and the high-potential-barrier approximation. The good agreement with the results of Monte Carlo simulation verifies that the reaction rate can be well predicted using the stochastic averaging method.
Guillaume, François; Fritz, Sébastien; Boichard, Didier; Druet, Tom
2008-01-01
The efficiency of the French marker-assisted selection (MAS) was estimated by a simulation study. The data files of two different time periods were used: April 2004 and 2006. The simulation method used the structure of the existing French MAS: same pedigree, same marker genotypes and same animals with records. The program simulated breeding values and new records based on this existing structure and knowledge on the QTL used in MAS (variance and frequency). Reliabilities of genetic values of young animals (less than one year old) obtained with and without marker information were compared to assess the efficiency of MAS for evaluation of milk, fat and protein yields and fat and protein contents. Mean gains of reliability ranged from 0.015 to 0.094 and from 0.038 to 0.114 in 2004 and 2006, respectively. The larger number of animals genotyped and the use of a new set of genetic markers can explain the improvement of MAS reliability from 2004 to 2006. This improvement was also observed by analysis of information content for young candidates. The gain of MAS reliability with respect to classical selection was larger for sons of sires with genotyped progeny daughters with records. Finally, it was shown that when superiority of MAS over classical selection was estimated with daughter yield deviations obtained after progeny test instead of true breeding values, the gain was underestimated. PMID:18096117
Simulation of surface processes
Jónsson, Hannes
2011-01-01
Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939
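The ten-orders-of-magnitude time-scale gap described above is what harmonic transition state theory, as used in adaptive kinetic Monte Carlo, bridges with an Arrhenius-type rate. The prefactor and barrier below are assumed typical values for a surface hop, not results from this work:

```python
import numpy as np

# Harmonic transition-state-theory rate:  k = nu0 * exp(-dE / (kB * T))
kB = 8.617333262e-5      # Boltzmann constant (eV/K)
nu0 = 1.0e13             # attempt frequency (1/s), typical vibrational order
dE = 0.5                 # energy barrier (eV), assumed illustrative value

def htst_rate(T):
    """Rate of a thermally activated transition over a barrier dE at temperature T."""
    return nu0 * np.exp(-dE / (kB * T))

rate_300 = htst_rate(300.0)   # hops per second at room temperature
```

At 300 K a 0.5 eV barrier yields on the order of 10^4 transitions per second against ~10^13 vibrations per second, which is why direct classical dynamics cannot reach diffusion or annealing events and rare-event methods such as adaptive kinetic Monte Carlo are needed.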
Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.
Wu, Zuobao; Weng, Michael X
2005-04-01
Flexible job-shop scheduling problems are an important extension of classical job-shop scheduling problems and present additional complexity, mainly because modern machines provide a considerable amount of overlapping capacity. Classical scheduling methods are generally incapable of addressing such capacity overlap. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined, distinguishing jobs with one operation left from jobs with more than one operation left, and different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. The computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
Modeling and 2-D discrete simulation of dislocation dynamics for plastic deformation of metal
NASA Astrophysics Data System (ADS)
Liu, Juan; Cui, Zhenshan; Ou, Hengan; Ruan, Liqun
2013-05-01
Two methods are employed in this paper to investigate dislocation evolution during plastic deformation of metal. One is two-dimensional discrete dislocation dynamics (2D-DDD) simulation, and the other is dislocation dynamics modeling by means of nonlinear analysis. As screw dislocations are prone to disappear by cross-slip, only edge dislocations are taken into account in the simulation. First, the 2D-DDD approach is used to graphically simulate and exhibit the collective motion of a large number of discrete dislocations. Initially, grains are generated in the simulation cells according to the mechanism of grain growth, and the initial dislocations are randomly distributed within the grains and relaxed under the internal stress. During the simulation, the externally imposed stress, the long-range stress contribution of all dislocations, and the short-range stress caused by the grain boundaries are calculated. Under the action of these forces, dislocations glide, climb, multiply, annihilate and react with each other. In addition, the thermal activation process is included. Through the simulation, the dislocation distribution and the stress-strain curves can be obtained. On the other hand, based on classical dislocation theory, the variation of the dislocation density with time is described by nonlinear differential equations. The finite difference method (FDM) is used to solve these differential equations. Dislocation evolution at a constant strain rate is taken as an example to verify the rationality of the model.
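The second modeling route can be sketched as a nonlinear dislocation-density rate equation integrated by a simple finite-difference (forward Euler) scheme. A Kocks-Mecking-type storage/recovery law and the coefficients below are assumptions for illustration; the paper's exact equations may differ:

```python
import numpy as np

# Assumed Kocks-Mecking-type evolution law (storage minus dynamic recovery):
#   d(rho)/d(eps) = k1 * sqrt(rho) - k2 * rho
k1, k2 = 1.0e8, 10.0          # storage and recovery coefficients (illustrative)
rho0 = 1.0e12                 # initial dislocation density (1/m^2)

def evolve(rho0, eps_max, n_steps):
    """Forward-Euler finite-difference integration over strain eps."""
    d_eps = eps_max / n_steps
    rho = np.empty(n_steps + 1)
    rho[0] = rho0
    for i in range(n_steps):
        rho[i + 1] = rho[i] + d_eps * (k1 * np.sqrt(rho[i]) - k2 * rho[i])
    return rho

rho = evolve(rho0, eps_max=1.0, n_steps=10_000)
rho_sat = (k1 / k2) ** 2      # analytic saturation density, (k1/k2)^2
```

The density rises monotonically toward the saturation value `(k1/k2)^2`, mirroring the hardening-to-saturation shape of the stress-strain curves the abstract mentions.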
Simulating Thin Sheets: Buckling, Wrinkling, Folding and Growth
NASA Astrophysics Data System (ADS)
Vetter, Roman; Stoop, Norbert; Wittel, Falk K.; Herrmann, Hans J.
2014-03-01
Numerical simulations of thin sheets undergoing large deformations are computationally challenging. Depending on the scenario, they may spontaneously buckle, wrinkle, fold, or crumple. Nature's thin tissues often experience significant anisotropic growth, which can act as the driving force for such instabilities. We use a recently developed finite element model to simulate the rich variety of nonlinear responses of Kirchhoff-Love sheets. The model uses subdivision surface shape functions in order to guarantee convergence of the method, and to allow a finite element description of anisotropically growing sheets in the classical Rayleigh-Ritz formalism. We illustrate the great potential in this approach by simulating the inflation of airbags, the buckling of a stretched cylinder, as well as the formation and scaling of wrinkles at free boundaries of growing sheets. Finally, we compare the folding of spatially confined sheets subject to growth and shrinking confinement to find that the two processes are equivalent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozas, R. E.; Department of Physics, University of Bío-Bío, Av. Collao 1202, P.O. Box 5C, Concepción; Demiraǧ, A. D.
Thermophysical properties of liquid nickel (Ni) around the melting temperature are investigated by means of classical molecular dynamics (MD) simulation, using three different embedded atom method potentials to model the interactions between the Ni atoms. Melting temperature, enthalpy, static structure factor, self-diffusion coefficient, shear viscosity, and thermal diffusivity are compared to recent experimental results. Using ab initio MD simulation, we also determine the static structure factor and the mean-squared displacement at the experimental melting point. For most of the properties, excellent agreement is found between experiment and simulation, provided the comparison is made relative to the corresponding melting temperature. We discuss the validity of the Hansen-Verlet criterion for the static structure factor as well as the Stokes-Einstein relation between the self-diffusion coefficient and the shear viscosity. The thermal diffusivity is extracted from the autocorrelation function of a wavenumber-dependent temperature fluctuation variable.
Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-01-01
Background Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. Objective The aim of this study is to evaluate the effectiveness of simulation with iterative discussions versus the classical approach of simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. Methods This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, QC, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio-video-recorded, complex high-fidelity simulation, either SID or classical, covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use the SPSS and NVivo software packages. Results This study is in its preliminary stages and the results are expected to be made available by April 2016.
Conclusions This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that simulation with iterative discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students. PMID:26888076
Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2016-11-01
The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve the coupled multiphysics problem. The fluid motion is solved by the lattice Boltzmann method, while solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling results. It was validated with classic problems such as the migration of rigid particles and an ellipsoidal particle's orbit in shear flow. Examples of applications to blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper. NIH.
Introduction to Quantum Intelligence
NASA Technical Reports Server (NTRS)
Zak, Michail
1996-01-01
An impact of ideas associated with the concept of a hypothetical quantum computer upon classical computing is analyzed. Two fundamental properties of quantum computing: direct simulations of probabilities, and influence between different branches of probabilistic scenarios, as well as their classical versions, are discussed.
Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control
NASA Technical Reports Server (NTRS)
Jacklin, Stephen A.
1998-01-01
This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition, there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because of the decreasing signal-to-noise ratio as the vibration was reduced. The closed-loop simulation considered both local-model identification, with measured vibration feedback, and global-model identification, with feedback of the identified uncontrolled vibration. The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speeds, and relative ease of implementation.
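The LMS filter approach can be sketched for a small static transfer matrix. The 2x2 real matrix, step size, and noise level below are illustrative assumptions; the HHC problem is frequency-domain, complex-valued, and larger:

```python
import numpy as np

# LMS identification of a constant transfer matrix T relating control
# inputs u to vibration measurements y:  y = T u + noise.
rng = np.random.default_rng(7)
T_true = np.array([[1.0, -0.5],
                   [0.3,  0.8]])
T_hat = np.zeros((2, 2))           # identified estimate, starts at zero
mu = 0.05                          # LMS step size (must be small enough for stability)

for _ in range(5000):
    u = rng.normal(size=2)                        # probe input this cycle
    y = T_true @ u + 0.01 * rng.normal(size=2)    # noisy measured response
    e = y - T_hat @ u                             # prediction error
    T_hat += mu * np.outer(e, u)                  # LMS rank-one gradient update

err = np.max(np.abs(T_hat - T_true))
```

The closed-loop difficulty noted in the abstract shows up here as well: as control drives `y` toward zero, the error signal that drives the update shrinks with it, slowing convergence.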
Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy
2001-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
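The control-allocation step can be viewed as a bound-constrained least-squares problem over the effector positions. The sketch below uses a generic projected fixed-point (gradient) iteration in that spirit, not the paper's exact globally convergent algorithm; the effectiveness matrix, commanded moments, and limits are hypothetical:

```python
import numpy as np

# Hypothetical control-effectiveness matrix B (2 moments, 3 effectors)
B = np.array([[1.0, 0.5, -0.3],
              [0.2, -1.0, 0.8]])
d = np.array([0.7, -0.4])          # commanded moments
lo, hi = -0.5, 0.5                 # effector position limits

u = np.zeros(3)
eta = 0.4 / np.linalg.norm(B, 2) ** 2   # step size well below the stability bound
for _ in range(500):
    # gradient step on ||B u - d||^2, then projection onto the box constraints
    u = np.clip(u - eta * B.T @ (B @ u - d), lo, hi)

print(np.round(B @ u, 3))  # achieved moments
```

Because each iteration is a fixed matrix-vector product followed by a clip, the per-step cost is deterministic, which is the property that makes fixed-point schemes attractive for onboard implementation.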
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Cabello, Adán
2018-03-01
We consider an ideal experiment in which unlimited nonprojective quantum measurements are sequentially performed on a system that is initially entangled with a distant one. At each step of the sequence, the measurement is randomly chosen between two alternatives. However, regardless of which measurement is chosen or which outcome is obtained, the quantum state of the pair always remains entangled. We show that the classical simulation of the reduced state of the distant system requires not only unlimited rounds of communication, but also that the distant system has infinite memory. Otherwise, a thermodynamical argument predicts heating at a distance. Our proposal can be used for experimentally ruling out nonlocal finite-memory classical models of quantum theory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamada, Atsushi; Kojima, Hidekazu; Okazaki, Susumu, E-mail: okazaki@apchem.nagoya-u.ac.jp
2014-08-28
In order to investigate proton transfer reactions in solution, mixed quantum-classical molecular dynamics calculations have been carried out based on our previously proposed quantum equation of motion for the reacting system [A. Yamada and S. Okazaki, J. Chem. Phys. 128, 044507 (2008)]. The surface hopping method was applied to describe forces acting on the classical solvent degrees of freedom. In a series of our studies, quantum and solvent effects on the reaction dynamics in solutions have been analysed in detail. Here, we report our mixed quantum-classical molecular dynamics calculations for the intramolecular proton transfer of malonaldehyde in water. The thermally activated proton transfer process, i.e., vibrational excitation in the reactant state followed by transition to the product state and vibrational relaxation in the product state, as well as the tunneling reaction, can be described by solving the equation of motion. Zero-point energy is, of course, included, too. The quantum simulation in water has been compared with the fully classical one and with the wave packet calculation in vacuum. The calculated quantum reaction rate in water was 0.70 ps⁻¹, about 2.5 times faster than that in vacuum, 0.27 ps⁻¹. This indicates that the solvent water accelerates the reaction. Further, the quantum calculation resulted in a reaction rate about 2 times faster than the fully classical calculation, which indicates that quantum effects enhance the reaction rate, too. The contribution from the three reaction mechanisms, i.e., tunneling, thermal activation, and barrier-vanishing reactions, is 33:46:21 in the mixed quantum-classical calculations. This clearly shows that the tunneling effect is important in the reaction.
Efficient Classical Algorithm for Boson Sampling with Partially Distinguishable Photons
NASA Astrophysics Data System (ADS)
Renema, J. J.; Menssen, A.; Clements, W. R.; Triginer, G.; Kolthammer, W. S.; Walmsley, I. A.
2018-06-01
We demonstrate how boson sampling with photons of partial distinguishability can be expressed in terms of interference of fewer photons. We use this observation to propose a classical algorithm to simulate the output of a boson sampler fed with photons of partial distinguishability. We find conditions for which this algorithm is efficient, which gives a lower limit on the required indistinguishability to demonstrate a quantum advantage. Under these conditions, adding more photons only polynomially increases the computational cost to simulate a boson sampling experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perera, Meewanage Dilina N; Li, Ying Wai; Eisenbach, Markus
We describe the study of the thermodynamics of materials using replica-exchange Wang-Landau (REWL) sampling, a generic framework for massively parallel implementations of the Wang-Landau Monte Carlo method. To evaluate the performance and scalability of the method, we investigate the magnetic phase transition in body-centered cubic (bcc) iron using the classical Heisenberg model parameterized with first-principles calculations. We demonstrate that our framework leads to a significant speedup without compromising accuracy or precision, and facilitates the study of much larger systems than is possible with its serial counterpart.
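The serial Wang-Landau iteration underlying REWL can be sketched on a toy 2x2 periodic Ising model, whose exact degeneracies are 2, 12, and 2 for E = -8, 0, +8 (so the relative density of states is 1:6:1). The lattice size, flatness criterion, and final modification factor below are illustrative choices, not the paper's bcc-iron Heisenberg setup:

```python
import numpy as np

rng = np.random.default_rng(1)
L = 2
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # 2x2 periodic Ising energy; rolls count each wrap-around bond
    return -(np.sum(s * np.roll(s, 1, axis=0)) + np.sum(s * np.roll(s, 1, axis=1)))

energies = [-8, 0, 8]
idx = {e: i for i, e in enumerate(energies)}
lng = np.zeros(3)          # running estimate of log g(E)
hist = np.zeros(3)         # visit histogram for the flatness check
f = 1.0                    # log modification factor

E = energy(spins)
while f > 1e-6:
    for _ in range(1000):
        i, j = rng.integers(L), rng.integers(L)
        spins[i, j] *= -1                       # propose a single spin flip
        E_new = energy(spins)
        if rng.random() < np.exp(lng[idx[E]] - lng[idx[E_new]]):
            E = E_new                           # accept with min(1, g(E)/g(E'))
        else:
            spins[i, j] *= -1                   # reject: undo the flip
        lng[idx[E]] += f
        hist[idx[E]] += 1
    if hist.min() > 0.8 * hist.mean():          # histogram "flat enough"
        f *= 0.5
        hist[:] = 0

g = np.exp(lng - lng.min())
print(np.round(g, 2))   # relative degeneracies, expected near [1, 6, 1]
```

REWL parallelizes this loop by assigning overlapping energy windows to independent walkers and exchanging configurations between them; the serial kernel per walker is unchanged.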
Quantifying non-linear dynamics of mass-springs in series oscillators via asymptotic approach
NASA Astrophysics Data System (ADS)
Starosta, Roman; Sypniewska-Kamińska, Grażyna; Awrejcewicz, Jan
2017-05-01
The dynamical regular response of an oscillator with two serially connected springs with nonlinear characteristics of cubic type, governed by a set of differential-algebraic equations (DAEs), is studied. The classical approach of the multiple scales method (MSM) in the time domain has been employed and appropriately modified to solve the governing DAEs of two systems, i.e., with one and two degrees of freedom. The approximate analytical solutions have been verified by numerical simulations.
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Gong, Feng; Lu, Anjiang; Zhang, Damin; Zhang, Zhengping
2017-12-01
In this paper, we propose a scheme that integrates quantum key distribution and private classical communication via continuous variables. The integrated scheme employs both quadratures of a weak coherent state, with encrypted bits encoded on the signs and Gaussian random numbers encoded on the values of the quadratures. The integration enables quantum and classical data to share the same physical and logical channel. Simulation results based on practical system parameters demonstrate that both classical communication and quantum communication can be implemented over distances of tens of kilometers, thus providing a potential solution for simultaneous transmission of quantum communication and classical communication.
NASA Astrophysics Data System (ADS)
Crum, Dax M.; Valsaraj, Amithraj; David, John K.; Register, Leonard F.; Banerjee, Sanjay K.
2016-12-01
Particle-based ensemble semi-classical Monte Carlo (MC) methods employ quantum corrections (QCs) to address quantum confinement and degenerate carrier populations when modeling tomorrow's ultra-scaled metal-oxide-semiconductor field-effect transistors. Here, we present the most complete treatment of quantum confinement and carrier degeneracy effects in a three-dimensional (3D) MC device simulator to date, and illustrate their significance through simulation of n-channel Si and III-V FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as consideration of quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli-blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers or sub-carriers, which minimizes classical carrier-carrier scattering intrinsically incompatible with degenerate statistics. Quantum-confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as commonly done. However, we use these valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. FinFET simulations are used to illustrate the contributions of each of these QCs. Collectively, these quantum effects can substantially reduce and even eliminate otherwise expected benefits of the considered In0.53Ga0.47As FinFETs over otherwise identical Si FinFETs, despite the higher thermal velocities in In0.53Ga0.47As. It may also be possible to extend these basic uses of QCPs, however calculated, to still more computationally efficient drift-diffusion and hydrodynamic simulations, and the basic concepts even to compact device modeling.
Mathematical model of the SH-3G helicopter
NASA Technical Reports Server (NTRS)
Phillips, J. D.
1982-01-01
A mathematical model of the Sikorsky SH-3G helicopter based on classical nonlinear, quasi-steady rotor theory was developed. The model was validated statically and dynamically by comparison with Navy flight-test data. The model incorporates ad hoc revisions which address the ideal assumptions of classical rotor theory and improve the static trim characteristics to provide a more realistic simulation, while retaining the simplicity of the classical model.
Reconfigurable Flight Control Designs With Application to the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zhenglu
1999-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
A Monte Carlo (N,V,T) study of the stability of charged interfaces: A simulation on a hypersphere
NASA Astrophysics Data System (ADS)
Delville, A.; Pellenq, R. J.-M.; Caillol, J. M.
1997-05-01
We have used an exact expression for the Coulombic interactions derived on a hypersphere of a Euclidean space of dimension four to determine the swelling behavior of two infinite charged plates neutralized by exchangeable counterions. Monte Carlo simulations in the (N,V,T) ensemble allow for a derivation of the short-ranged hard-core repulsions and long-ranged electrostatic forces, which are the two components of the interionic forces in the context of the primitive model. Comparison with numerical results obtained by a classical Euclidean method illustrates the efficiency of the hyperspherical approach, especially at strong coupling between the charged particles, i.e., for divalent counterions and small plate separation.
A fast ultrasonic simulation tool based on massively parallel implementations
NASA Astrophysics Data System (ADS)
Lambert, Jason; Rougeron, Gilles; Lacassagne, Lionel; Chatillon, Sylvain
2014-02-01
This paper presents a CIVA-optimized ultrasonic inspection simulation tool which takes advantage of the power of massively parallel architectures: graphics processing units (GPU) and multi-core general-purpose processors (GPP). This tool is based on the classical approach used in CIVA: the interaction model is based on Kirchhoff, and the ultrasonic field around the defect is computed by the pencil method. The model has been adapted and parallelized for both architectures. At this stage, the configurations addressed by the tool are: multi- and mono-element probes, planar specimens made of simple isotropic materials, and planar rectangular defects or side-drilled holes of small diameter. Validations of the model accuracy and performance measurements are presented.
Design Of Combined Stochastic Feedforward/Feedback Control
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1989-01-01
Methodology accommodates variety of control structures and design techniques. In methodology for combined stochastic feedforward/feedback control, main objectives of feedforward and feedback control laws seen clearly. Error-integral feedback, dynamic compensation, rate-command control structure, and the like are integral elements of methodology. Another advantage of methodology is flexibility to develop variety of techniques for design of feedback control with arbitrary structures to obtain feedback controller: includes stochastic output feedback, multiconfiguration control, decentralized control, or frequency and classical control methods. Control modes of system include capture and tracking of localizer and glideslope, crab, decrab, and flare. By use of recommended incremental implementation, control laws simulated on digital computer and connected with nonlinear digital simulation of aircraft and its systems.
Accelerated path-integral simulations using ring-polymer interpolation
NASA Astrophysics Data System (ADS)
Buxton, Samuel J.; Habershon, Scott
2017-12-01
Imaginary-time path-integral (PI) molecular simulations can be used to calculate exact quantum statistical mechanical properties for complex systems containing many interacting atoms and molecules. The limiting computational factor in a PI simulation is typically the evaluation of the potential energy surface (PES) and forces at each ring-polymer "bead"; for an n-bead ring-polymer, a PI simulation is typically n times more expensive than the corresponding classical simulation. To address the increased computational effort of PI simulations, several approaches have been developed recently, most notably based on the idea of ring-polymer contraction, which exploits either the separation of the PES into short-range and long-range contributions or the availability of a computationally inexpensive PES which can be incorporated to effectively smooth the ring-polymer PES; neither approach is satisfactory in applications to systems modeled by PESs given by on-the-fly ab initio calculations. In this article, we describe a new method, ring-polymer interpolation (RPI), which can be used to accelerate PI simulations without any prior assumptions about the PES. In simulations of liquid water modeled by an empirical PES (or force field) under ambient conditions, where quantum effects are known to play a subtle role in influencing experimental observables such as radial distribution functions, we find that RPI can accurately reproduce the results of fully-converged PI simulations, albeit with far fewer PES evaluations. This approach therefore opens the possibility of large-scale PI simulations using ab initio PESs evaluated on-the-fly without the drawbacks of current methods.
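The n-bead imaginary-time discretization at the heart of PI methods can be checked deterministically for a 1D harmonic oscillator (ħ = m = ω = 1), where the exact partition function is Z = 1/(2 sinh(β/2)). This is not the RPI method itself, just a minimal illustration of the bead discretization it accelerates; the grid and bead count are illustrative:

```python
import numpy as np

beta, n = 2.0, 64                 # inverse temperature and number of beads
tau = beta / n                    # imaginary-time slice per bead
x = np.linspace(-6.0, 6.0, 300)   # position grid
dx = x[1] - x[0]

V = 0.5 * x ** 2                  # harmonic potential
# Symmetric-Trotter density matrix linking neighboring beads
free = np.sqrt(1.0 / (2 * np.pi * tau)) * np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * tau))
T = dx * free * np.exp(-0.5 * tau * (V[:, None] + V[None, :]))

# Closing the ring of n beads = trace of the n-fold matrix product
Z = np.trace(np.linalg.matrix_power(T, n))
Z_exact = 1.0 / (2.0 * np.sinh(beta / 2.0))
print(round(Z, 4), round(Z_exact, 4))
```

The n-fold matrix product mirrors the n PES evaluations per step in a real PI simulation, which is exactly the cost that contraction and interpolation schemes aim to reduce.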
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy-conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided, and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear-scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of more than 100,000 atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
Software-defined Quantum Networking Ecosystem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humble, Travis S.; Sadlier, Ronald
The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in Python that make use of the software library interfaces. A user then initiates the application scripts, which invokes the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.
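The discrete-event approach described above can be sketched generically with a priority queue of timestamped events; this is not the OSTI software's actual API, just an illustration of how link latencies propagate a qubit and its classical heralding message through a hypothetical three-node chain:

```python
import heapq

# Hypothetical chain A -> B -> C; a qubit and its classical herald both
# leave A at t = 0 and are forwarded hop by hop with per-link delays.
latency = {"A": 5.0, "B": 3.0}        # outgoing-link delay per node
next_hop = {"A": "B", "B": "C"}

events = []                            # heap of (arrival_time, node, payload)
for payload in ("qubit", "herald"):
    heapq.heappush(events, (latency["A"], "B", payload))

arrivals = []
while events:
    t, node, payload = heapq.heappop(events)   # always the earliest event
    arrivals.append((t, node, payload))
    if node in next_hop:                       # forward along the chain
        heapq.heappush(events, (t + latency[node], next_hop[node], payload))

print(arrivals[-1])   # final arrival at C
```

Processing events strictly in time order is what lets such a simulator verify that quantum and classical signals stay synchronized across multiple hops.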
Molecular dynamics simulation of metallic impurity diffusion in liquid lead-bismuth eutectic (LBE)
NASA Astrophysics Data System (ADS)
Gao, Yun; Takahashi, Minoru; Cavallotti, Carlo; Raos, Guido
2018-04-01
Corrosion of stainless steels by lead-bismuth eutectic (LBE) is an important problem which depends, amongst other things, on the diffusion of the steel components inside this liquid alloy. Here we present the results of classical molecular dynamics simulations of the diffusion of Fe and Ni within LBE. The simulations complement experimental studies of impurity diffusion by our group and provide an atomic-level understanding of the relevant diffusion phenomena. They are based on the embedded atom method (EAM) to represent many-body interactions among atoms. The EAM potentials employed in our simulations have been validated against ab initio density functional calculations. We show that the experimental and simulation results for the temperature-dependent viscosity of LBE and the impurity diffusion coefficients can be reconciled by assuming that the Ni and Fe diffuse mainly as nanoscopic clusters below 1300 K. The average Fe and Ni cluster sizes decrease with increasing temperature, and there is essentially single-atom diffusion at higher temperatures.
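Diffusion coefficients such as those discussed above are typically extracted from MD trajectories via the Einstein relation MSD(t) = 6Dt. A self-contained sketch on synthetic Brownian trajectories (the value D = 0.25 and particle counts are made up for the test data, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_atoms, dt = 2000, 300, 1.0
D_true = 0.25
# Brownian trajectories: Gaussian steps with variance 2*D*dt per dimension
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, n_atoms, 3))
pos = np.cumsum(steps, axis=0)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(pos ** 2, axis=2), axis=1)   # MSD averaged over particles
D_est = np.polyfit(t, msd, 1)[0] / 6.0            # Einstein relation: slope = 6D

print(round(D_est, 3))
```

For cluster diffusion, the same fit applied per cluster size would show the size-dependent slowdown the paper uses to reconcile simulation with experiment.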
Simulation of regimes of convection and plume dynamics by the thermal Lattice Boltzmann Method
NASA Astrophysics Data System (ADS)
Mora, Peter; Yuen, David A.
2018-02-01
We present 2D simulations using the Lattice Boltzmann Method (LBM) of a fluid in a rectangular box being heated from below and cooled from above. We observe plumes, hot narrow upwellings from the base, and down-going cold chutes from the top. We have varied both the Rayleigh and Prandtl numbers, from Ra = 10^3 to Ra = 10^10 and from Pr = 1 to Pr = 5×10^4, leading to Rayleigh-Bénard convection cells at low Rayleigh numbers through to vigorous convection and unstable plumes with pronounced vortices and eddies at high Rayleigh numbers. We conduct simulations with high Prandtl numbers up to Pr = 50,000 to simulate in the inertial regime. We find for cases when Pr ⩾ 100 that we obtain a series of narrow plumes of upwelling fluid with mushroom heads and chutes of downwelling fluid. We also present simulations at a Prandtl number of 0.7 for Rayleigh numbers varying from Ra = 10^4 through Ra = 10^7.5. We demonstrate that the Nusselt number follows power-law scaling of the form Nu ∼ Ra^γ, where γ = 0.279 ± 0.002, which is consistent with the published value of γ = 0.281 in the literature. These results show that the LBM is capable of reproducing results obtained with classical macroscopic methods such as spectral methods, and demonstrate the great potential of the LBM for studying thermal convection and plume dynamics relevant to geodynamics.
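The scaling exponent γ quoted above is obtained by a straight-line fit of log Nu against log Ra; a minimal sketch on synthetic data that follows the reported exponent (the prefactor 0.12 is an arbitrary assumption):

```python
import numpy as np

# Synthetic Nusselt numbers following Nu = 0.12 * Ra^0.279 over the studied range
Ra = np.logspace(4, 7.5, 8)
Nu = 0.12 * Ra ** 0.279

# Linear fit in log-log space: slope is the scaling exponent gamma
gamma, log_prefactor = np.polyfit(np.log10(Ra), np.log10(Nu), 1)
print(round(gamma, 3))   # → 0.279
```

With real simulation output, the fit would also yield the quoted uncertainty via the covariance of the least-squares slope.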
High Performance Parallel Computational Nanotechnology
NASA Technical Reports Server (NTRS)
Saini, Subhash; Craw, James M. (Technical Monitor)
1995-01-01
At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require powerful, highly parallel systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems.
We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before it was manufactured.
Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems
NASA Technical Reports Server (NTRS)
Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.
2016-01-01
The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results of these nonlinear and linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one second intervals.
The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
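The time-domain margin-extraction idea, increasing the loop gain until the simulated response diverges, can be sketched on a toy plant 1/(s+1)^3 with proportional feedback, whose analytic gain margin is exactly a factor of 8 (Routh analysis of s^3 + 3s^2 + 3s + 1 + K). The plant, integrator settings, and gain sweep are illustrative, not the SLS/SAVANT setup:

```python
import numpy as np

# State-space chain of three first-order lags realizing 1/(s+1)^3,
# closed with proportional feedback u = -K*y; unstable for K > 8.
A = np.array([[-1., 0., 0.],
              [ 1., -1., 0.],
              [ 0., 1., -1.]])
b = np.array([1., 0., 0.])
c = np.array([0., 0., 1.])

def growth_rate(K, dt=0.02, T=400.0):
    """Empirical exponential growth rate of the closed loop (RK4 integration)."""
    Acl = A - K * np.outer(b, c)
    x = np.array([1., 0., 0.])        # initial disturbance
    half = int(T / (2 * dt))
    log_mid = 0.0
    for step in range(2 * half):
        k1 = Acl @ x
        k2 = Acl @ (x + 0.5 * dt * k1)
        k3 = Acl @ (x + 0.5 * dt * k2)
        k4 = Acl @ (x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        if step == half - 1:
            log_mid = np.log(np.linalg.norm(x))
    # rate measured over the second half, after transients have decayed
    return (np.log(np.linalg.norm(x)) - log_mid) / (T / 2)

# Sweep the loop gain upward until the time response starts to diverge
K_crit = next(K for K in np.arange(7.0, 10.0, 0.25) if growth_rate(K) > 0.0)
print(K_crit)   # close to the analytic margin of 8
```

A bisection refinement of `K_crit`, or an added loop time delay for the phase margin, follows the same divergence test.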
Zhu, Zhaozhong; Anttila, Verneri; Smoller, Jordan W; Lee, Phil H
2018-01-01
Advances in recent genome wide association studies (GWAS) suggest that pleiotropic effects on human complex traits are widespread. A number of classic and recent meta-analysis methods have been used to identify genetic loci with pleiotropic effects, but the overall performance of these methods is not well understood. In this work, we use extensive simulations and case studies of GWAS datasets to investigate the power and type-I error rates of ten meta-analysis methods. We specifically focus on three conditions commonly encountered in studies of multiple traits: (1) extensive heterogeneity of genetic effects; (2) characterization of trait-specific association; and (3) inflated correlation of GWAS due to overlapping samples. Although the statistical power is highly variable under distinct study conditions, we found that several methods had superior power under diverse heterogeneity. In particular, the classic fixed-effects model showed surprisingly good performance when a variant is associated with more than half of the study traits. As the number of traits with null effects increases, ASSET performed best, with competitive specificity and sensitivity. With opposite directional effects, CPASSOC featured first-rate power. However, caution is advised when using CPASSOC for studying genetically correlated traits with overlapping samples. We conclude with a discussion of unresolved issues and directions for future research.
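The classic fixed-effects model mentioned above pools per-study effect estimates by inverse-variance weighting. A minimal sketch with made-up summary statistics for a single variant (the betas and standard errors are illustrative, not from any GWAS):

```python
import numpy as np

# Per-study effect estimates (betas) and standard errors for one variant
beta = np.array([0.12, 0.10, 0.15, 0.08])
se   = np.array([0.04, 0.05, 0.06, 0.03])

w = 1.0 / se ** 2                          # inverse-variance weights
beta_fe = np.sum(w * beta) / np.sum(w)     # pooled fixed-effects estimate
se_fe = np.sqrt(1.0 / np.sum(w))           # standard error of the pooled estimate
z = beta_fe / se_fe                        # meta-analysis z-score

print(round(beta_fe, 4), round(se_fe, 4), round(z, 2))
```

Because the weights assume a single shared effect, this model gains power when most traits carry the signal but degrades under the heterogeneity scenarios the paper highlights, which is where ASSET and CPASSOC differ.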
Quantum simulation of quantum field theory using continuous variables
Marshall, Kevin; Pooser, Raphael C.; Siopsis, George; ...
2015-12-14
Much progress has been made in the field of quantum computing using continuous variables over the last couple of years. This includes the generation of extremely large entangled cluster states (10,000 modes, in fact) as well as a fault-tolerant architecture. This has led to the point that continuous-variable quantum computing can indeed be thought of as a viable alternative for universal quantum computing. With that in mind, we present a new algorithm for continuous-variable quantum computers which gives an exponential speedup over the best known classical methods. Specifically, this relates to efficiently calculating the scattering amplitudes in scalar bosonic quantum field theory, a problem that is known to be hard using a classical computer. Thus, we give an experimental implementation based on cluster states that is feasible with today's technology.
Stimulus Configuration, Classical Conditioning, and Hippocampal Function.
ERIC Educational Resources Information Center
Schmajuk, Nestor A.; DiCarlo, James J.
1991-01-01
The participation of the hippocampus in classical conditioning is described in terms of a multilayer network portraying stimulus configuration. A model of hippocampal function is presented, and computer simulations are used to study neural activity in the various brain areas mapped according to the model. (SLD)
Single-snapshot DOA estimation by using Compressed Sensing
NASA Astrophysics Data System (ADS)
Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin
2014-12-01
This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smooth ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
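The single-snapshot ℓ1 approach can be sketched with a plain ISTA solver for the LASSO problem on a steering-vector dictionary. The array size, DOA grid, source angles, and regularization weight are illustrative assumptions; SPICE and IAA-APES are not reproduced here:

```python
import numpy as np

M = 16                                        # sensors, half-wavelength spacing
angles = np.arange(-90, 91, 5)                # coarse DOA grid (degrees)
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(np.deg2rad(angles))))

x_true = np.zeros(len(angles))
x_true[angles == -20] = 1.0                   # two unit-power on-grid sources
x_true[angles == 35] = 1.0
y = A @ x_true                                # the single snapshot (noise-free)

# ISTA iterations for the LASSO problem: min ||y - A x||^2 + lam * ||x||_1
lam = 1.0
L = np.linalg.norm(A, 2) ** 2                 # step size from the Lipschitz constant
x = np.zeros(len(angles), dtype=complex)
for _ in range(500):
    g = x + A.conj().T @ (y - A @ x) / L      # gradient step on the data fit
    # complex soft-thresholding (proximal step for the l1 penalty)
    x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)

est = angles[np.abs(x) > 0.5 * np.abs(x).max()]
print(est)
```

With additive noise, `lam` would be raised to match the noise level; the resulting sparse spectrum is what gives the CS beamformer its super-resolution behavior relative to the Fourier beamformer.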
Computation in Classical Mechanics with Easy Java Simulations (EJS)
NASA Astrophysics Data System (ADS)
Cox, Anne J.
2006-12-01
Let your students enjoy creating animations and incorporating some computational physics into your Classical Mechanics course. This talk will demonstrate the use of an Open Source Physics package, Easy Java Simulations (EJS), in an already existing sophomore/junior level Classical Mechanics course. EJS allows for incremental introduction of computational physics into existing courses because it is easy to use (for instructors and students alike) and it is open source. Students can use this tool for numerical solutions to problems (as they can with commercial systems such as Mathcad and Mathematica), but they can also generate their own animations. For example, students in Classical Mechanics use Lagrangian mechanics to solve a problem, and then use EJS not only to solve the differential equations numerically, but also to show the associated motion (and check their answers). EJS, developed by Francisco Esquembre (http://fem.um.es/Ejs/), is built on the OpenSource Physics framework (http://www.opensourcephysics.org/) supported through NSF DUE0442581.
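The workflow described, deriving the equation of motion and then solving it numerically to check the answer, can be mimicked outside EJS as well. A minimal sketch, assuming a simple pendulum and a classical fourth-order Runge-Kutta integrator, compared against the small-angle analytic solution:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Planar pendulum from its Lagrangian equation of motion: theta'' = -(g/l) sin(theta)
g_over_l = 9.81

def f(t, y):
    theta, omega = y
    return np.array([omega, -g_over_l * np.sin(theta)])

h = 0.001
y = np.array([0.05, 0.0])                 # small initial angle, at rest
for t in np.arange(0.0, 2.0, h):
    y = rk4_step(f, t, y, h)

# Small-angle analytic solution: theta(t) = theta0 cos(sqrt(g/l) t)
analytic = 0.05 * np.cos(np.sqrt(g_over_l) * 2.0)
print(y[0], analytic)                     # the two agree closely at small amplitude
```

Replacing the right-hand side with the full Lagrangian equations of a richer system (double pendulum, driven oscillator) is the same one-line change a student would make in EJS.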
A finite difference model for free surface gravity drainage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couri, F.R.; Ramey, H.J. Jr.
1993-09-01
The unconfined gravity flow of liquid with a free surface into a well is a classical well test problem which has not been well understood by either hydrologists or petroleum engineers. Paradigms have led many authors to treat an incompressible flow as compressible flow to justify the delayed yield behavior of a time-drawdown test. A finite-difference model has been developed to simulate the free surface gravity flow of an unconfined single phase, infinitely large reservoir into a well. The model was verified against experimental results from sandbox models in the literature and against classical methods applied to observation wells in the groundwater literature. The simulator response was also compared with the analytical Theis (1935) and Ramey et al. (1989) approaches for wellbore pressure at late producing times. The seepage face at the sandface and the delayed yield behavior were reproduced by the model considering a small liquid compressibility and an incompressible porous medium. The potential buildup (recovery) simulated by the model evidenced a phenomenon different from the drawdown, contrary to statements found in the groundwater literature. Graphs of buildup potential vs. time, buildup seepage face length vs. time, and free surface head and sand bottom head radial profiles showed that the liquid refills the desaturating cone as a flat moving surface. The late time pseudo-radial behavior was only approached after exaggeratedly long times.
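A stripped-down analogue of such a finite-difference simulator is an explicit scheme for one-dimensional pressure diffusion with a step drawdown imposed at the well face, checked against the infinite-acting erfc solution. This is only a sketch of the numerical machinery (linear geometry, toy units), not the authors' radial free-surface model:

```python
import numpy as np
from math import erfc, sqrt

# Explicit finite differences for dh/dt = a * d2h/dx2 with a unit step
# drawdown at x = 0: a crude linear analogue of a constant-pressure
# well test in a slightly compressible reservoir.
a, L, nx = 1.0, 10.0, 401
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.25 * dx**2 / a                   # below the explicit stability limit of 0.5
h = np.zeros(nx)
h[0] = 1.0                              # unit drawdown at the wellbore face

t = 0.0
while t < 1.0:
    h[1:-1] += a * dt / dx**2 * (h[2:] - 2 * h[1:-1] + h[:-2])
    h[0], h[-1] = 1.0, 0.0              # fixed inner head, undisturbed far field
    t += dt

# Infinite-acting analytic solution for the same problem: h = erfc(x / (2 sqrt(a t)))
analytic = np.array([erfc(xi / (2 * sqrt(a * t))) for xi in x])
print(np.abs(h - analytic).max())       # small while the front is far from x = L
```

The verification step mirrors the paper's methodology: the discrete solution is trusted only after it reproduces a known analytic limit (here erfc; in the paper, Theis-type responses at late time).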
Symmetrical Windowing for Quantum States in Quasi-Classical Trajectory Simulations
NASA Astrophysics Data System (ADS)
Cotton, Stephen Joshua
An approach has been developed for extracting approximate quantum state-to-state information from classical trajectory simulations which "quantizes" symmetrically both the initial and final classical actions associated with the degrees of freedom of interest using quantum number bins (or "window functions") which are significantly narrower than unit-width. This approach thus imposes a more stringent quantization condition on classical trajectory simulations than has been traditionally employed, while doing so in a manner that is time-symmetric and microscopically reversible. To demonstrate this "symmetric quasi-classical" (SQC) approach for a simple real system, collinear H + H2 reactive scattering calculations were performed [S.J. Cotton and W.H. Miller, J. Phys. Chem. A 117, 7190 (2013)] with SQC-quantization applied to the H 2 vibrational degree of freedom (DOF). It was seen that the use of window functions of approximately 1/2-unit width led to calculated reaction probabilities in very good agreement with quantum mechanical results over the threshold energy region, representing a significant improvement over what is obtained using the traditional quasi-classical procedure. The SQC approach was then applied [S.J. Cotton and W.H. Miller, J. Chem. Phys. 139, 234112 (2013)] to the much more interesting and challenging problem of incorporating non-adiabatic effects into what would otherwise be standard classical trajectory simulations. To do this, the classical Meyer-Miller (MM) Hamiltonian was used to model the electronic DOFs, with SQC-quantization applied to the classical "electronic" actions of the MM model---representing the occupations of the electronic states---in order to extract the electronic state population dynamics. 
It was demonstrated that if one ties the zero-point energy (ZPE) of the electronic DOFs to the SQC windowing function's width parameter this very simple SQC/MM approach is capable of quantitatively reproducing quantum mechanical results for a range of standard benchmark models of electronically non-adiabatic processes, including applications where "quantum" coherence effects are significant. Notably, among these benchmarks was the well-studied "spin-boson" model of condensed phase non-adiabatic dynamics, in both its symmetric and asymmetric forms---the latter of which many classical approaches fail to treat successfully. The SQC/MM approach to the treatment of non-adiabatic dynamics was next applied [S.J. Cotton, K. Igumenshchev, and W.H. Miller, J. Chem. Phys., 141, 084104 (2014)] to several recently proposed models of condensed phase electron transfer (ET) processes. For these problems, a flux-side correlation function framework modified for consistency with the SQC approach was developed for the calculation of thermal ET rate constants, and excellent accuracy was seen over wide ranges of non-adiabatic coupling strength and energetic bias/exothermicity. Significantly, the "inverted regime" in thermal rate constants (with increasing bias) known from Marcus Theory was reproduced quantitatively for these models---representing the successful treatment of another regime that classical approaches generally have difficulty in correctly describing. Relatedly, a model of photoinduced proton coupled electron transfer (PCET) was also addressed, and it was shown that the SQC/MM approach could reasonably model the explicit population dynamics of the photoexcited electron donor and acceptor states over the four parameter regimes considered. The potential utility of the SQC/MM technique lies in its stunning simplicity and the ease by which it may readily be incorporated into "ordinary" molecular dynamics (MD) simulations. 
In short, a typical MD simulation may be augmented to take non-adiabatic effects into account simply by introducing an auxiliary pair of classical "electronic" action-angle variables for each energetically viable Born-Oppenheimer surface, and time-evolving these auxiliary variables via Hamilton's equations (using the MM electronic Hamiltonian) in the same manner that the other classical variables---i.e., the coordinates of all the nuclei---are evolved forward in time. In a complex molecular system involving many hundreds or thousands of nuclear DOFs, the propagation of these extra "electronic" variables represents a modest increase in computational effort, and yet, the examples presented herein suggest that in many instances the SQC/MM approach will describe the true non-adiabatic quantum dynamics to a reasonable and useful degree of quantitative accuracy.
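The core SQC bookkeeping, keeping only trajectories whose final classical actions land inside narrow symmetric windows around integer quantum numbers, can be sketched as follows. A toy Gaussian ensemble of final actions stands in for real trajectory output; the roughly 1/2-unit window width follows the text:

```python
import numpy as np

def sqc_assign(actions, width=0.5):
    """Assign integer quantum numbers only to trajectories whose final
    classical action lies within a symmetric window of the given width
    around an integer; everything outside the windows gets zero weight."""
    n = np.rint(actions)
    inside = np.abs(actions - n) <= width / 2
    return n[inside].astype(int), inside.mean()

# Toy ensemble of final actions (in place of actual trajectory results).
rng = np.random.default_rng(1)
final_actions = rng.normal(loc=1.0, scale=0.4, size=100_000)

levels, accepted_fraction = sqc_assign(final_actions, width=0.5)
values, counts = np.unique(levels, return_counts=True)
probs = counts / counts.sum()                 # windowed state-to-state weights
print(dict(zip(values.tolist(), np.round(probs, 3).tolist())))
```

The symmetric part of the prescription is that the same windowing is applied to the initial actions when preparing the ensemble, which is what makes the procedure time-symmetric and microscopically reversible.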
Quantum molecular dynamics simulations of dense matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, L.; Kress, J.; Troullier, N.
1997-12-31
The authors have developed a quantum molecular dynamics (QMD) simulation method for investigating the properties of dense matter in a variety of environments. The technique treats a periodically-replicated reference cell containing N atoms in which the nuclei move according to the classical equations of motion. The interatomic forces are generated from the quantum mechanical interactions between the electrons and nuclei. To generate these forces, the authors employ several methods of varying sophistication, from tight-binding (TB) to elaborate density functional (DF) schemes. In the latter case, lengthy simulations on the order of 200 atoms are routinely performed, while for TB, which requires no self-consistency, upwards of 1000 atoms are systematically treated. The QMD method has been applied to a variety of cases: (1) fluid/plasma hydrogen from liquid density to 20-fold volume compression for temperatures of a thousand to a million degrees Kelvin; (2) isotopic hydrogenic mixtures; (3) liquid metals (Li, Na, K); (4) impurities such as argon in dense hydrogen plasmas; and (5) metal/insulator transitions in rare gas systems (Ar, Kr) under high compression. The advent of parallel versions of the methods, especially for fast eigensolvers, presages LDA simulations in the range of 500-1000 atoms and TB runs for tens of thousands of particles. This leap should allow treatment of shock chemistry as well as large-scale mixtures of species in highly transient environments.
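The classical propagation of the nuclei, with the quantum mechanical force evaluation treated as a black box, is typically a velocity Verlet loop. A minimal sketch, with a harmonic stand-in for the expensive TB/DF force call, verified by energy conservation:

```python
import numpy as np

def velocity_verlet(pos, vel, force_fn, mass, dt, n_steps):
    """Integrate Newton's equations for the nuclei; force_fn stands in for
    the quantum mechanical force evaluation performed at each step."""
    f = force_fn(pos)
    traj = [pos.copy()]
    for _ in range(n_steps):
        pos = pos + vel * dt + 0.5 * f / mass * dt**2   # position update
        f_new = force_fn(pos)                           # new forces at new positions
        vel = vel + 0.5 * (f + f_new) / mass * dt       # velocity update
        f = f_new
        traj.append(pos.copy())
    return np.array(traj), vel

# Toy check: a harmonic force, for which total energy should be conserved.
k, m = 1.0, 1.0
force = lambda x: -k * x
traj, vel = velocity_verlet(np.array([1.0]), np.array([0.0]), force, m, 0.01, 1000)
energy = float(0.5 * m * vel[0]**2 + 0.5 * k * traj[-1, 0]**2)
print(energy)   # stays near the initial value of 0.5
```

In a real QMD code the only structural change is that `force_fn` triggers a self-consistent electronic-structure calculation, which is why the DF variant is limited to hundreds of atoms while TB reaches thousands.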
Probability evolution method for exit location distribution
NASA Astrophysics Data System (ADS)
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit is characterized by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise with finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, with the majority of that time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.
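The brute-force approach the paper improves upon can be sketched directly: Euler-Maruyama simulation of overdamped escape from a metastable well, whose mean exit time grows exponentially as the noise weakens (the Kramers/Arrhenius scaling that makes naive sampling impractical). A toy double-well stands in for the paper's examples:

```python
import numpy as np

# Overdamped Langevin escape from the left well of V(x) = x^4/4 - x^2/2
# (minima at x = -1, +1; barrier height 1/4 at x = 0):
#   dx = -V'(x) dt + sqrt(2 eps) dW
def escape_time(eps, dt=1e-3, x0=-1.0, x_exit=0.5, rng=None, max_steps=10**7):
    rng = rng or np.random.default_rng()
    x, t = x0, 0.0
    for _ in range(max_steps):
        x += -(x**3 - x) * dt + np.sqrt(2 * eps * dt) * rng.standard_normal()
        t += dt
        if x >= x_exit:          # trajectory has crossed well past the saddle
            return t
    return np.inf

rng = np.random.default_rng(2)
times = [escape_time(0.15, rng=rng) for _ in range(20)]
mean_t = np.mean(times)
print(mean_t)   # grows roughly like exp(dV/eps) as eps shrinks
```

Almost all of the simulated time is spent dithering near x = -1, which is exactly the waste the interface-and-reinjection construction is designed to eliminate.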
Amatore, Christian; Oleinick, Alexander; Klymenko, Oleksiy V; Svir, Irina
2005-08-12
Herein, we propose a method for reconstructing any plausible macroscopic hydrodynamic flow profile occurring locally within a rectangular microfluidic channel. The method is based on experimental currents measured at single or double microband electrodes embedded in one channel wall. A perfectly adequate quasiconformal mapping of spatial coordinates introduced in our previous work [Electrochem. Commun. 2004, 6, 1123] and an exponentially expanding time grid proposed earlier [J. Electroanal. Chem. 2003, 557, 75] are used, in conjunction with a Ritz-method solution of the corresponding variational problem, for the numerical reconstruction of flow profiles. Herein, the concept of the method is presented and developed theoretically, and its validity is tested using pseudo-experimental currents emulated by simulation of the diffusion-convection problem in a channel flow cell, to which random Gaussian current noise is added. The flow profiles reconstructed by our method compare successfully with those introduced a priori into the simulations, even when these include significant distortions compared with either classical Poiseuille or electro-osmotic flows.
Experimentally modeling stochastic processes with less memory by the use of a quantum processor
Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.
2017-01-01
Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
A Spectral Element Discretisation on Unstructured Triangle / Tetrahedral Meshes for Elastodynamics
NASA Astrophysics Data System (ADS)
May, Dave A.; Gabriel, Alice-A.
2017-04-01
The spectral element method (SEM) defined over quadrilateral and hexahedral element geometries has proven to be a fast, accurate and scalable approach to study wave propagation phenomena. In the context of regional-scale seismology and/or simulations incorporating finite earthquake sources, the geometric restrictions associated with hexahedral elements can limit the applicability of the classical quad./hex. SEM. Here we describe a continuous Galerkin spectral element discretisation defined over unstructured meshes composed of triangles (2D) or tetrahedra (3D). The method uses a stable, nodal basis constructed from PKD polynomials and thus retains the spectral accuracy and low dispersive properties of the classical SEM, in addition to the geometric versatility provided by unstructured simplex meshes. For the particular basis and quadrature rule we have adopted, the discretisation results in a mass matrix which is not diagonal, thereby mandating that linear solvers be used. To that end, we have developed efficient solvers and preconditioners which are robust with respect to the polynomial order (p), and possess high arithmetic intensity. Furthermore, we also consider using implicit time integrators, together with a p-multigrid preconditioner, to circumvent the CFL condition. Implicit time integrators become particularly relevant when solving problems on poor quality meshes, or meshes containing elements with a widely varying range of length scales, both of which frequently arise when meshing non-trivial geometries. We demonstrate the applicability of the new method by examining a number of two- and three-dimensional wave propagation scenarios. These scenarios serve to characterise the accuracy and cost of the new method. Lastly, we will assess the potential benefits of using implicit time integrators for regional scale wave propagation simulations.
Efficient Variational Quantum Simulator Incorporating Active Error Minimization
NASA Astrophysics Data System (ADS)
Li, Ying; Benjamin, Simon C.
2017-04-01
One of the key applications for quantum computers will be the simulation of other quantum systems that arise in chemistry, materials science, etc., in order to accelerate the process of discovery. It is important to ask the following question: Can this simulation be achieved using near-future quantum processors, of modest size and under imperfect control, or must it await the more distant era of large-scale fault-tolerant quantum computing? Here, we propose a variational method involving closely integrated classical and quantum coprocessors. We presume that all operations in the quantum coprocessor are prone to error. The impact of such errors is minimized by boosting them artificially and then extrapolating to the zero-error case. In comparison to a more conventional optimized Trotterization technique, we find that our protocol is efficient and appears to be fundamentally more robust against error accumulation.
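The error boosting and extrapolation idea can be illustrated with a toy noise model in which the measured expectation value decays linearly in an artificial boost factor. The linear model and its parameters are assumptions for illustration; the real protocol measures a quantum coprocessor at several deliberately amplified error rates and extrapolates the results back to zero error:

```python
import numpy as np

def noisy_expectation(boost, p0=0.02, exact=1.0):
    """Toy error model: the measured value decays linearly with the
    boosted error rate, <O>(boost) = exact * (1 - p0 * boost)."""
    return exact * (1.0 - p0 * boost)

boosts = np.array([1.0, 2.0, 3.0])            # error-rate multipliers (1 = native)
measured = np.array([noisy_expectation(b) for b in boosts])

coeffs = np.polyfit(boosts, measured, 1)      # linear (Richardson) fit in the boost
zero_error = float(np.polyval(coeffs, 0.0))   # extrapolate to the zero-error case
print(zero_error)                             # -> 1.0 for this linear toy model
```

With real shot noise and a nonlinear decay the fit order and the number of boost points become accuracy/cost trade-offs, but the structure of the mitigation step is the same.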
Effects of two-temperature model on cascade evolution in Ni and NiFe
Samolyuk, German D.; Xue, Haizhou; Bei, Hongbin; ...
2016-07-05
We perform molecular dynamics simulations of Ni ion cascades in Ni and equiatomic NiFe under the following conditions: (a) classical molecular dynamics (MD) simulations without consideration of electronic energy loss, (b) classical MD simulations with the electronic stopping included, and (c) using the coupled two-temperature MD (2T-MD) model that incorporates both the electronic stopping and the electron-phonon interactions. Our results indicate that the electronic effects are more profound in the higher-energy cascades, and that the 2T-MD model results in a smaller amount of surviving damage and smaller defect clusters, while less damage is produced in NiFe than in Ni.
Pauli structures arising from confined particles interacting via a statistical potential
NASA Astrophysics Data System (ADS)
Batle, Josep; Ciftja, Orion; Farouk, Ahmed; Alkhambashi, Majid; Abdalla, Soliman
2017-09-01
There have been suggestions that the Pauli exclusion principle alone can lead a non-interacting (free) system of identical fermions to form crystalline structures dubbed Pauli crystals. Single-shot imaging experiments for the case of ultra-cold systems of free spin-polarized fermionic atoms in a two-dimensional harmonic trap appear to show geometric arrangements that cannot be characterized as Wigner crystals. This work explores this idea and considers a well-known approach that enables one to treat a quantum system of free fermions as a system of classical particles interacting with a statistical interaction potential. The model under consideration, though classical in nature, incorporates the quantum statistics by endowing the classical particles with an effective interaction potential. The reasonable expectation is that possible Pauli crystal features seen in experiments may manifest in this model, which captures the correct quantum statistics as a first-order correction. We use the Monte Carlo simulated annealing method to obtain the most stable configurations of finite two-dimensional systems of confined particles that interact with an appropriate statistical repulsion potential. We consider both an isotropic harmonic and a hard-wall confinement potential. Despite minor differences, the most stable configurations observed in our model correspond to the reported Pauli crystals in single-shot imaging experiments of free spin-polarized fermions in a harmonic trap. The crystalline configurations observed appear to be different from the classical Wigner crystal structures that would emerge had the confined classical particles interacted via a pairwise Coulomb repulsion.
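The Monte Carlo simulated annealing search for the most stable confined configuration can be sketched with an ordinary pairwise 1/r repulsion standing in for the statistical potential, so this toy finds Wigner-like rather than Pauli structures; particle number, step size, and cooling schedule are arbitrary choices:

```python
import numpy as np

def energy(pos):
    """Harmonic trap plus pairwise 1/r repulsion (toy stand-in for the
    statistical potential used in the paper)."""
    trap = 0.5 * np.sum(pos**2)
    i, j = np.triu_indices(len(pos), k=1)
    d = np.linalg.norm(pos[i] - pos[j], axis=1)
    return trap + np.sum(1.0 / d)

rng = np.random.default_rng(3)
pos = rng.normal(size=(6, 2))        # 6 particles in 2D, random start
e = e0 = energy(pos)
T = 1.0                              # initial "temperature"
for step in range(20_000):
    trial = pos.copy()
    k = rng.integers(len(pos))
    trial[k] += 0.1 * rng.normal(size=2)         # move one particle
    e_trial = energy(trial)
    # Metropolis acceptance: always downhill, occasionally uphill while hot
    if e_trial < e or rng.random() < np.exp(-(e_trial - e) / T):
        pos, e = trial, e_trial
    T = max(1e-3, T * 0.9995)        # geometric cooling schedule
print(e)                             # energy of the annealed configuration
```

Swapping the 1/r term for the fermionic statistical repulsion (and adding the hard-wall variant of the trap) is the only change needed to reproduce the paper's search, which is what makes the comparison between Wigner-like and Pauli-like ground states clean.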
Collective Phase in Resource Competition in a Highly Diverse Ecosystem.
Tikhonov, Mikhail; Monasson, Remi
2017-01-27
Organisms shape their own environment, which in turn affects their survival. This feedback becomes especially important for communities containing a large number of species; however, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to analytically solve a classic ecological model of resource competition introduced by MacArthur in 1969. We show that the nonintuitive phenomenology of highly diverse ecosystems includes a phase where the environment constructed by the community becomes fully decoupled from the outside world.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
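For exponential claims the classical (Cramér-Lundberg) risk model has a closed-form ruin probability, which makes a direct Monte Carlo check easy. A sketch with arbitrary parameters, using a finite horizon as an approximation to infinite time (the paper's Laplace-inversion approximants target exactly this quantity):

```python
import numpy as np

# Classical risk model: premiums at rate c, Poisson(lam) claim arrivals,
# exponential claims with mean mu. For exponential claims the exact ruin
# probability is psi(u) = rho * exp(-(1 - rho) * u / mu), rho = lam*mu/c.
lam, mu, c, u0 = 1.0, 1.0, 1.5, 2.0
rho = lam * mu / c

def ruined(rng, horizon=100.0):
    """Simulate the surplus process at claim epochs until ruin or horizon."""
    t, surplus = 0.0, u0
    while t < horizon:
        dt = rng.exponential(1.0 / lam)            # time to the next claim
        t += dt
        surplus += c * dt - rng.exponential(mu)    # premiums in, one claim out
        if surplus < 0:
            return True
    return False

rng = np.random.default_rng(4)
n = 10_000
est = sum(ruined(rng) for _ in range(n)) / n
exact = rho * np.exp(-(1 - rho) * u0 / mu)
print(est, exact)   # should agree within Monte Carlo error
```

The simulation study in the paper compares transform-inversion approximants against this kind of ground truth; the finite horizon is safe here because the positive premium drift makes ruin after late times exponentially unlikely.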
Thrust vector control algorithm design for the Cassini spacecraft
NASA Technical Reports Server (NTRS)
Enright, Paul J.
1993-01-01
This paper describes a preliminary design of the thrust vector control algorithm for the interplanetary spacecraft, Cassini. Topics of discussion include flight software architecture, modeling of sensors, actuators, and vehicle dynamics, and controller design and analysis via classical methods. Special attention is paid to potential interactions with structural flexibilities and propellant dynamics. Controller performance is evaluated in a simulation environment built around a multi-body dynamics model, which contains nonlinear models of the relevant hardware and preliminary versions of supporting attitude determination and control functions.
The First Fermi in a High Energy Nuclear Collision.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krasnitz, A.
1999-08-09
At very high energies, weak coupling, non-perturbative methods can be used to study classical gluon production in nuclear collisions. One observes in numerical simulations that after an initial formation time, the produced partons are on shell, and their subsequent evolution can be studied using transport theory. At the initial formation time, a simple non-perturbative relation exists between the energy and number densities of the produced partons, and a scale determined by the saturated parton density in the nucleus.
NASA Technical Reports Server (NTRS)
Strganac, T. W.; Mook, D. T.
1986-01-01
A means of numerically simulating flutter is established by implementing a predictor-corrector algorithm to solve the equations of motion. Aerodynamic loads are provided by the unsteady vortex lattice method (UVLM). This method is illustrated by obtaining stable and unstable responses to initial disturbances in the case of two-degree-of-freedom motion. It was found that for some angles of attack and dynamic pressures the initial disturbance decays, while for others it grows (flutter). When flutter occurs, the solution yields the amplitude and period of the resulting limit cycle. The preliminary results attest to the feasibility of this method for studying flutter in cases that would be difficult to treat using a classical approach.
CSF-Based Non-Ground Point Extraction from LiDAR Data
NASA Astrophysics Data System (ADS)
Shen, A.; Zhang, W.; Shi, H.
2017-09-01
Region growing is a classical method of point cloud segmentation. Based on the idea of collecting pixels with similar properties to form regions, region growing is widely used in many fields such as medicine, forestry and remote sensing. The algorithm has two core problems: the selection of the seed points and the setting of the growth constraints, of which the selection of the seed points is the foundation. In this paper, we propose a CSF (Cloth Simulation Filtering) based method to extract the non-ground seed points effectively. Experiments have shown that this method obtains an effective group of seed points compared with traditional methods. It is a new attempt at seed point extraction.
Stability of rigid rotors supported by air foil bearings: Comparison of two fundamental approaches
NASA Astrophysics Data System (ADS)
Larsen, Jon S.; Santos, Ilmar F.; von Osmanski, Sebastian
2016-10-01
High speed direct drive motors enable the use of Air Foil Bearings (AFB) in a wide range of applications due to the elimination of gear forces. Unfortunately, AFB supported rotors are lightly damped, and an accurate prediction of their Onset Speed of Instability (OSI) is therefore important. This paper compares two fundamental methods for predicting the OSI. One is based on a nonlinear time domain simulation and the other on a linearised frequency domain method and a perturbation of the Reynolds equation. Both methods are based on equivalent models and should predict similar results. Significant discrepancies are observed, leading to the question: is the classical frequency domain method sufficiently accurate? The discrepancies and possible explanations are discussed in detail.
NASA Astrophysics Data System (ADS)
Walter, Nathan P.; Jaiswal, Abhishek; Cai, Zhikun; Zhang, Yang
2018-07-01
Neutron scattering is a powerful experimental technique for characterizing the structure and dynamics of materials on the atomic or molecular scale. However, the interpretation of experimental data from neutron scattering is oftentimes not trivial, partly because scattering methods probe ensemble-averaged information in reciprocal space. Therefore, computer simulations, such as classical and ab initio molecular dynamics, are frequently used to unravel the time-dependent atomistic configurations that can reproduce the scattering patterns and thus assist in understanding the microscopic origin of certain properties of materials. LiquidLib is a post-processing package for analyzing the trajectory of atomistic simulations of liquids and liquid-like matter with application to neutron scattering experiments. From an atomistic simulation, LiquidLib provides the computation of various statistical quantities including the pair distribution function, the weighted and unweighted structure factors, the mean squared displacement, the non-Gaussian parameter, the four-point correlation function, the velocity autocorrelation function, the self and collective van Hove correlation functions, the self and collective intermediate scattering functions, and the bond orientational order parameter. LiquidLib analyzes atomistic trajectories generated from packages such as LAMMPS, GROMACS, and VASP. It also offers an extendable platform to conveniently integrate new quantities into the library and to ingest simulation trajectories in other file formats for analysis. Weighting the quantities by element-specific neutron-scattering lengths provides results directly comparable to neutron scattering measurements. Lastly, LiquidLib is independent of dimensionality, which allows analysis of trajectories in two, three, and higher dimensions. The code is beginning to find worldwide use.
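As an example of the kind of quantity such a post-processing library computes, the mean squared displacement can be extracted from a trajectory array and checked against free diffusion, where MSD(t) = 2dDt in d dimensions. A synthetic random-walk trajectory stands in for simulation output here; this is a sketch of the analysis, not LiquidLib's API:

```python
import numpy as np

def mean_squared_displacement(traj):
    """MSD relative to frame 0; traj has shape (n_frames, n_atoms, dim)."""
    disp = traj - traj[0]                        # displacement from the first frame
    return np.mean(np.sum(disp**2, axis=-1), axis=1)

# Synthetic trajectory: independent Brownian walkers with diffusivity D.
rng = np.random.default_rng(5)
D, dt, n_frames, n_atoms, dim = 0.5, 0.01, 1000, 1000, 3
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_frames - 1, n_atoms, dim))
traj = np.concatenate([np.zeros((1, n_atoms, dim)), np.cumsum(steps, axis=0)])

msd = mean_squared_displacement(traj)
t = np.arange(n_frames) * dt
D_fit = np.polyfit(t, msd, 1)[0] / (2 * dim)     # Einstein relation: slope / (2 d)
print(D_fit)                                     # close to the input D = 0.5
```

A production tool would additionally average over multiple time origins and, for the neutron-weighted quantities, multiply each species' contribution by its coherent or incoherent scattering length.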
NASA Astrophysics Data System (ADS)
Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.
2017-07-01
A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of the contaminant sources. The first level consists of three parallel independent ICAs and acts as a preconditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach named the "knock the base" method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. The ICA is modified by incorporating the new "knock the base" method.
Algorithms of GPU-enabled reactive force field (ReaxFF) molecular dynamics.
Zheng, Mo; Li, Xiaoxia; Guo, Li
2013-04-01
Reactive force field (ReaxFF), a recent and novel bond-order potential, allows reactive molecular dynamics (ReaxFF MD) simulations for modeling larger and more complex molecular systems involving chemical reactions when compared with computation-intensive quantum mechanical methods. However, ReaxFF MD can be approximately 10-50 times slower than classical MD due to its explicit modeling of bond forming and breaking, the dynamic charge equilibration at each time-step, and its time-step being one order of magnitude smaller than that of classical MD, all of which pose significant computational challenges to reaching spatio-temporal scales of nanometers and nanoseconds. The very recent advances in graphics processing units (GPU) provide not only highly favorable performance for GPU-enabled MD programs compared with CPU implementations but also an opportunity to cope with the computing power and memory demands that ReaxFF MD imposes on computer hardware. In this paper, we present the algorithms of GMD-Reax, the first GPU-enabled ReaxFF MD program, with significantly improved performance surpassing CPU implementations on desktop workstations. The performance of GMD-Reax has been benchmarked on a PC equipped with an NVIDIA C2050 GPU for coal pyrolysis simulation systems with atom counts ranging from 1378 to 27,283. GMD-Reax achieved speedups as high as 12 times over van Duin et al.'s FORTRAN codes in LAMMPS on 8 CPU cores and 6 times over LAMMPS' C codes based on PuReMD, in terms of the simulation time per time-step averaged over 100 steps. GMD-Reax could be used as a new and efficient computational tool for exploring very complex molecular reactions via ReaxFF MD simulation on desktop workstations. Copyright © 2013 Elsevier Inc. All rights reserved.
Quantum diffusion of H/D on Ni(111)—A partially adiabatic centroid MD study
NASA Astrophysics Data System (ADS)
Hopkinson, A. R.; Probert, M. I. J.
2018-03-01
We present the results of a theoretical study of H/D diffusion on a Ni(111) surface at a range of temperatures from 250 K to 75 K. The diffusion is studied using both classical molecular dynamics and the partially adiabatic centroid molecular dynamics method. The calculations are performed with the hydrogen (or deuterium) moving in 3D across a static nickel surface using a novel Fourier-interpolated potential energy surface which has been parameterized to density functional theory calculations. The classical simulations yield diffusion coefficients that are far too small and vary too strongly with temperature compared with experiment. By contrast, the quantum simulations are in much better agreement with experiment and show that quantum effects in the diffusion of hydrogen are significant at all temperatures studied. There is also a crossover to a quantum-dominated diffusive regime for temperatures below ˜150 K for hydrogen and ˜85 K for deuterium. The quantum diffusion coefficients are found to accurately reproduce the spread in values with temperature, but with an absolute value that is a little high compared with experiment.
Solvent effects on the properties of hyperbranched polythiophenes.
Torras, Juan; Zanuy, David; Aradilla, David; Alemán, Carlos
2016-09-21
The structural and electronic properties of all-thiophene dendrimers and dendrons in solution have been evaluated using very different theoretical approaches based on quantum mechanical (QM) and hybrid QM/molecular mechanics (MM) methodologies: (i) calculations on minimum energy conformations using an implicit solvation model in combination with density functional theory (DFT) or time-dependent DFT (TD-DFT) methods; (ii) hybrid QM/MM calculations, in which the solute and the solvent molecules are represented at the DFT level and as point charges, respectively, on snapshots extracted from classical molecular dynamics (MD) simulations using explicit solvent molecules; and (iii) QM/MM-MD trajectories in which the solute is described at the DFT or TD-DFT level and the explicit solvent molecules are represented using classical force fields. Calculations have been performed in dichloromethane, tetrahydrofuran and dimethylformamide. A comparison of the results obtained using the different approaches with the available experimental data indicates that incorporating the effects associated with both the conformational dynamics of the dendrimer and the explicit solvent molecules is strictly necessary to satisfactorily reproduce the properties of the investigated systems. Accordingly, QM/MM-MD simulations are able to capture such effects, providing a reliable description of the relationship between electronic properties and conformational flexibility in all-thiophene dendrimers.
Accurate classical short-range forces for the study of collision cascades in Fe–Ni–Cr
Béland, Laurent Karim; Tamm, Artur; Mu, Sai; ...
2017-05-10
The predictive power of a classical molecular dynamics simulation is largely determined by the physical validity of its underlying empirical potential. In the case of high-energy collision cascades, it was recently shown that correctly modeling interactions at short distances is necessary to accurately predict primary damage production. An ab initio based framework is introduced for modifying an existing embedded-atom method FeNiCr potential to handle these short-range interactions. Density functional theory is used to calculate the energetics of two atoms approaching each other, embedded in the alloy, and to calculate the equation of state of the alloy as it is compressed. The pairwise terms and the embedding terms of the potential are modified in accordance with the ab initio results. Using this reparametrized potential, collision cascades are performed in Ni50Fe50, Ni80Cr20, and Ni33Fe33Cr33. The simulations reveal that alloying Ni and NiCr to Fe reduces primary damage production, in agreement with some previous calculations. Alloying Ni and NiFe to Cr does not reduce primary damage production, in contradiction with previous calculations.
NASA Astrophysics Data System (ADS)
Harvey, J.-P.; Gheribi, A. E.; Chartrand, P.
2012-12-01
In this work, an in silico procedure to generate a fully coherent set of thermodynamic properties obtained from classical molecular dynamics (MD) and Monte Carlo (MC) simulations is proposed. The procedure is applied to the Al-Zr system because of its importance in the development of high-strength Al-Li alloys and of bulk metallic glasses. Cohesive energies of the studied condensed phases of the Al-Zr system (the liquid phase, the fcc solid solution, and various orthorhombic stoichiometric compounds) are calculated using the modified embedded atom model (MEAM) in the second-nearest-neighbor (2NN) formalism. The Al-Zr MEAM-2NN potential is parameterized in this work using ab initio and experimental data found in the literature for the AlZr3-L12 structure, while its predictive ability is confirmed for several other solid structures and for the liquid phase. The thermodynamic integration (TI) method is implemented in a general MC algorithm in order to evaluate the absolute Gibbs energy of the liquid and fcc solutions. The entropy of mixing calculated from the TI method, combined with the enthalpy of mixing and heat capacity data generated from MD/MC simulations performed in the isobaric-isothermal (NPT) and canonical (NVT) ensembles, is used to parameterize the Gibbs energy function of all the condensed phases on the Al-rich side of the Al-Zr system in a CALculation of PHAse Diagrams (CALPHAD) approach. The modified quasichemical model in the pair approximation (MQMPA) and the cluster variation method (CVM) in the tetrahedron approximation are used to define the Gibbs energy of the liquid and of the fcc solid solution, respectively, over their entire range of composition. Thermodynamic and structural data generated from our MD/MC simulations are used as input data to parameterize these thermodynamic models. 
A detailed analysis of the validity and transferability of the Al-Zr MEAM-2NN potential is presented throughout our work by comparing the predicted properties obtained from this formalism with available ab initio and experimental data for both liquid and solid phases.
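The thermodynamic integration step can be illustrated on a toy system where the answer is known in closed form. The sketch below (an assumed 1-D harmonic "alchemical" switch with direct Gaussian sampling, not the authors' MC code) integrates ⟨dU/dλ⟩ over λ:

```python
import math
import random

def ti_free_energy(k0=1.0, k1=4.0, kT=1.0, n_lambda=20, n_samples=20000):
    """Thermodynamic integration for U(x; lam) = 0.5*k(lam)*x**2 with
    k(lam) = k0 + lam*(k1 - k0):
        dF = integral_0^1 <dU/dlam>_lam dlam,  dU/dlam = 0.5*(k1-k0)*x**2.
    The Boltzmann distribution at each lam is Gaussian here, so it is
    sampled directly (a stand-in for MC sampling of a real potential)."""
    dk = k1 - k0
    total = 0.0
    for i in range(n_lambda):
        lam = (i + 0.5) / n_lambda                       # midpoint quadrature
        sigma = math.sqrt(kT / (k0 + lam * dk))          # equilibrium width
        mean_x2 = sum(random.gauss(0.0, sigma) ** 2
                      for _ in range(n_samples)) / n_samples
        total += 0.5 * dk * mean_x2 / n_lambda
    return total

random.seed(0)
dF = ti_free_energy()
exact = 0.5 * 1.0 * math.log(4.0 / 1.0)   # 0.5*kT*ln(k1/k0), ~0.693
```

The sampled integral reproduces the analytic free-energy difference, which is the consistency check one would also apply before trusting TI on a real alloy potential.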
Numerical Simulation of the Francis Turbine and CAD Used to Optimize the Runner Design (2nd).
NASA Astrophysics Data System (ADS)
Sutikno, Priyono
2010-06-01
Hydro power is the most important renewable energy source on Earth. The water is free of charge, and the greenhouse-gas emissions (mainly CO2) associated with electricity generation in a hydroelectric power station are negligible. Hydro power stations are long-term installations that can be used for 50 years or more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced when required. In modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for the rehabilitation and upgrading of existing power plants, the important points of concern are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, and to avoid or minimize cavitation and vibration over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery of hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting. The classic design approach, based on graphic-analytical methods, is cumbersome and cannot highlight the positive or negative aspects of the design options, so replacing it with an adequate design method based on CAD software became a necessity. The many options chosen during the design calculation at a specific design step can then be verified, as a whole and in detail, from a single point of view. The final graphic post-processing is carried out only for the optimal solution, through a 3D representation of the runner as a whole, for final approval of the geometric shape. 
This article investigates the redesign of the runner of a medium-head Francis hydraulic turbine for a given value of its most important parameter, the rated specific speed ns.
A multi-state trajectory method for non-adiabatic dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Guohua, E-mail: taogh@pkusz.edu.cn
2016-03-07
A multi-state trajectory approach is proposed to describe nuclear-electron coupled dynamics in nonadiabatic simulations. In this approach, each electronic state is associated with an individual trajectory, among which electronic transitions occur. The set of these individual trajectories constitutes a multi-state trajectory, and the nuclear dynamics is described by one of these individual trajectories while the system is in the corresponding state. The total nuclear-electron coupled dynamics is obtained from the ensemble average of the multi-state trajectories. A variety of benchmark systems such as the spin-boson system have been tested, and the results generated using the quasi-classical version of the method show reasonably good agreement with exact quantum calculations. Featuring a clear multi-state picture, high efficiency, and excellent numerical stability, the proposed method may have advantages for implementation in realistic complex molecular systems, and it could be straightforwardly applied to general nonadiabatic dynamics involving multiple states.
Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2017-11-01
The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation were simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. It was validated against the classic problem of rigid ellipsoid particle orbit in shear flow, blood cell stretching tests, and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling was given for the transport of flexible filaments (drug carriers) in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steenbergen, K. G., E-mail: kgsteen@gmail.com; Gaston, N.
2014-02-14
Inspired by methods of remote sensing image analysis, we analyze structural variation in cluster molecular dynamics (MD) simulations through a unique application of the principal component analysis (PCA) and Pearson Correlation Coefficient (PCC). The PCA analysis characterizes the geometric shape of the cluster structure at each time step, yielding a detailed and quantitative measure of structural stability and variation at finite temperature. Our PCC analysis captures bond structure variation in MD, which can be used both to supplement the PCA analysis and to compare bond patterns between different cluster sizes. Relying only on atomic position data, without requirement for a priori structural input, PCA and PCC can be used to analyze both classical and ab initio MD simulations for any cluster composition or electronic configuration. Taken together, these statistical tools represent powerful new techniques for quantitative structural characterization and isomer identification in cluster MD.
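The geometric idea behind such a PCA shape descriptor can be sketched on synthetic 2-D point clouds (a simplification of the 3-D atomic-position analysis; the shapes and threshold values are illustrative assumptions):

```python
import math
import random

def shape_anisotropy(points):
    """Eigenvalue ratio of the 2x2 covariance (gyration) tensor of a
    point cloud: ~1 for an isotropic shape, ~0 for a linear chain."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # closed-form eigenvalues of the symmetric 2x2 tensor
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return (tr / 2.0 - root) / (tr / 2.0 + root)   # lambda_min / lambda_max

random.seed(2)
ring = [(math.cos(2 * math.pi * i / 100), math.sin(2 * math.pi * i / 100))
        for i in range(100)]                        # isotropic "cluster"
chain = [(0.1 * i, 0.02 * random.gauss(0.0, 1.0))
         for i in range(100)]                       # near-linear "cluster"
r_ring, r_chain = shape_anisotropy(ring), shape_anisotropy(chain)
```

Tracking such an eigenvalue ratio frame by frame is the kind of quantitative, structure-free shape measure the PCA analysis provides.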
NASA Astrophysics Data System (ADS)
Shi, L.; Skinner, J. L.
2015-07-01
OH-stretch inelastic incoherent neutron scattering (IINS) has been measured to determine the vibrational density of states (VDOS) in the OH-stretch region for liquid water, supercooled water, and ice Ih, providing complementary information to IR and Raman spectroscopies about hydrogen bonding in these phases. In this work, we extend the combined electronic-structure/molecular-dynamics (ES/MD) method, originally developed by Skinner and co-workers to simulate OH-stretch IR and Raman spectra, to the calculation of IINS spectra with small k values. The agreement between theory and experiment in the limit k → 0 is reasonable, further validating the reliability of the ES/MD method in simulating OH-stretch spectroscopy in condensed phases. The connections and differences between IINS and IR spectra are analyzed to illustrate the advantages of IINS over IR in estimating the OH-stretch VDOS.
Neural system modeling and simulation using Hybrid Functional Petri Net.
Tang, Yin; Wang, Fei
2012-02-01
The Petri net formalism has been proved to be powerful in biological modeling. It not only offers a highly intuitive graphical presentation but also combines the methods of classical systems biology with discrete modeling techniques. Hybrid Functional Petri Net (HFPN) was proposed specifically for biological system modeling. An array of well-constructed biological models using HFPN has yielded very interesting results. In this paper, we propose a method to represent neural system behavior, covering both biochemistry and electrochemistry, using the Petri net formalism. We built a model of the adrenergic system using HFPN and performed quantitative analysis. Our simulation results match the biological data well, showing that the model is very effective. Predictions made with our model further demonstrate the modeling power of HFPN and improve the understanding of the adrenergic system. The file of our model and further results with their analysis are available in our supplementary material.
Molecular dynamics simulations of bubble formation and cavitation in liquid metals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Insepov, Z.; Hassanein, A.; Bazhirov, T. T.
2007-11-01
Thermodynamics and kinetics of nano-scale bubble formation in liquid metals such as Li and Pb were studied by molecular dynamics (MD) simulations at pressures typical for magnetic and inertial fusion. Two different approaches to bubble formation were developed. In one method, radial densities, pressures, surface tensions, and work functions of the cavities in supercooled liquid lithium were calculated and compared with the surface tension experimental data. The critical radius of a stable cavity in liquid lithium was found for the first time. In the second method, the cavities were created in the highly stretched region of the liquid phase diagram, and then the stability boundary and the cavitation rates were calculated in liquid lead. The pressure dependences of cavitation frequencies were obtained over the temperature range 700-2700 K in liquid Pb. The results of MD calculations for the cavitation rate were compared with estimates of classical nucleation theory (CNT).
Molecular Dynamics Simulations of Simple Liquids
ERIC Educational Resources Information Center
Speer, Owner F.; Wengerter, Brian C.; Taylor, Ramona S.
2004-01-01
An experiment, in which students were given the opportunity to perform molecular dynamics simulations on a series of molecular liquids using the Amber suite of programs, is presented. They were introduced to both physical theories underlying classical mechanics simulations and to the atom-atom pair distribution function.
Scalar flux modeling in turbulent flames using iterative deconvolution
NASA Astrophysics Data System (ADS)
Nikolaou, Z. M.; Cant, R. S.; Vervisch, L.
2018-04-01
In the context of large eddy simulations, deconvolution is an attractive alternative for modeling the unclosed terms appearing in the filtered governing equations. Such methods have been used in a number of studies for non-reacting and incompressible flows; however, their application in reacting flows is limited in comparison. Deconvolution methods originate from clearly defined operations, and in theory they can be used in order to model any unclosed term in the filtered equations including the scalar flux. In this study, an iterative deconvolution algorithm is used in order to provide a closure for the scalar flux term in a turbulent premixed flame by explicitly filtering the deconvoluted fields. The assessment of the method is conducted a priori using a three-dimensional direct numerical simulation database of a turbulent freely propagating premixed flame in a canonical configuration. In contrast to most classical a priori studies, the assessment is more stringent as it is performed on a much coarser mesh which is constructed using the filtered fields as obtained from the direct simulations. For the conditions tested in this study, deconvolution is found to provide good estimates both of the scalar flux and of its divergence.
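Iterative deconvolution of this kind can be sketched in one dimension with a top-hat filter (a minimal Van Cittert-style iteration on synthetic data; the filter width, field, and iteration count are illustrative, not the study's algorithm or DNS fields):

```python
import math

def box_filter(u, w=5):
    """Top-hat filter of width w on a periodic 1-D field (the filter G)."""
    n, h = len(u), w // 2
    return [sum(u[(i + j) % n] for j in range(-h, h + 1)) / w
            for i in range(n)]

def van_cittert(u_bar, n_iter=5, w=5):
    """Iterative deconvolution: u <- u + (u_bar - G*u), starting from u_bar."""
    u = list(u_bar)
    for _ in range(n_iter):
        gu = box_filter(u, w)
        u = [ui + (fb - gi) for ui, fb, gi in zip(u, u_bar, gu)]
    return u

n = 64
truth = [math.sin(2 * math.pi * i / n) + 0.5 * math.sin(6 * math.pi * i / n)
         for i in range(n)]
filtered = box_filter(truth)          # what an LES would "see"
recovered = van_cittert(filtered)     # deconvoluted field

err_filtered = sum((a - b) ** 2 for a, b in zip(filtered, truth))
err_recovered = sum((a - b) ** 2 for a, b in zip(recovered, truth))
```

A few iterations sharply reduce the error of the deconvoluted field relative to the filtered one; in the a priori setting of the paper, such recovered fields are then explicitly re-filtered to close the unresolved terms.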
A New Computational Technique for the Generation of Optimised Aircraft Trajectories
NASA Astrophysics Data System (ADS)
Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto
2017-12-01
A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.
Study of Current Measurement Method Based on Circular Magnetic Field Sensing Array
Li, Zhenhua; Zhang, Siqiu; Wu, Zhengtian; Tao, Yuan
2018-01-01
Classic core-based instrument transformers are prone to magnetic saturation. This affects their measurement accuracy and limits their applications in measuring large direct currents (DC). Moreover, protection and control systems may malfunction because of such measurement errors. This paper presents a more accurate method for current measurement based on a circular magnetic field sensing array. The proposed measurement approach utilizes multiple Hall sensors evenly distributed on a circle, and the average value over all Hall sensors is taken as the final measurement. A calculation model is established for the case of magnetic field interference from a parallel wire, and the simulation results show that the error decreases significantly when the number of Hall sensors n is greater than 8. The measurement error is less than 0.06% when the wire spacing is greater than 2.5 times the radius of the sensor array. A simulation study of an off-center primary conductor is also conducted, and a Hall sensor compensation method is adopted to improve the accuracy. The simulation and test results indicate that the measurement error of the system is less than 0.1%. PMID:29734742
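The averaging principle rests on Ampère's law: the mean tangential field over a closed circle equals μ0·I/(2πR) regardless of where the enclosed conductor sits, and a discrete sensor array only approximates that mean. A sketch with ideal point sensors (geometry and values are illustrative, not the paper's setup):

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability

def measured_current(current, wire_xy, n_sensors=8, radius=1.0):
    """Average the tangential B-field over n_sensors points on a circle;
    by Ampere's law the ideal average is mu0*I/(2*pi*R), so dividing it
    out returns the current even for an off-center conductor."""
    b_sum = 0.0
    for k in range(n_sensors):
        th = 2.0 * math.pi * k / n_sensors
        px, py = radius * math.cos(th), radius * math.sin(th)
        dx, dy = px - wire_xy[0], py - wire_xy[1]
        r2 = dx * dx + dy * dy
        # field of an infinite straight wire, magnitude mu0*I/(2*pi*r),
        # directed azimuthally around the wire
        bx = -MU0 * current * dy / (2.0 * math.pi * r2)
        by = MU0 * current * dx / (2.0 * math.pi * r2)
        b_sum += bx * -math.sin(th) + by * math.cos(th)   # tangential part
    return (b_sum / n_sensors) * 2.0 * math.pi * radius / MU0

I_centered = measured_current(100.0, (0.0, 0.0))
I_offset = measured_current(100.0, (0.2, 0.1))   # off-center conductor
```

With the conductor centered the recovery is exact; with a modest offset the discrete-average error shrinks rapidly as the sensor count grows, consistent with the trend the abstract reports for n > 8.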
Event-driven Monte Carlo: Exact dynamics at all time scales for discrete-variable models
NASA Astrophysics Data System (ADS)
Mendoza-Coto, Alejandro; Díaz-Méndez, Rogelio; Pupillo, Guido
2016-06-01
We present an algorithm for the simulation of the exact real-time dynamics of classical many-body systems with discrete energy levels. In the same spirit as kinetic Monte Carlo methods, a stochastic solution of the master equation is found, with no need to define any other phase-space construction. However, unlike existing methods, the present algorithm does not assume any particular statistical distribution to perform moves or to advance the time, and thus is a unique tool for the numerical exploration of fast and ultrafast dynamical regimes. By decomposing the problem into a set of two-level subsystems, we find a natural variable step size that is well defined by the normalization condition of the transition probabilities between the levels. We successfully test the algorithm against known exact solutions for the non-equilibrium dynamics and equilibrium thermodynamic properties of Ising-spin models in one and two dimensions, and compare it to standard implementations of kinetic Monte Carlo methods. The present algorithm is directly applicable to the study of the real-time dynamics of a large class of classical Markov chains, and particularly to short-time situations where the exact evolution is relevant.
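The flavor of event-driven, master-equation-exact simulation can be sketched for a single two-level subsystem with exponential waiting times (a standard continuous-time Monte Carlo toy with assumed rates, not the authors' variable-step algorithm):

```python
import math
import random

def two_level_occupancy(r01, r10, t_max=5000.0):
    """Event-driven (kinetic) Monte Carlo for one two-level system:
    draw exponential waiting times from the current escape rate, which
    is an exact stochastic solution of the two-state master equation.
    Returns the fraction of time spent in state 1."""
    state, t, t_in_1 = 0, 0.0, 0.0
    while t < t_max:
        rate = r01 if state == 0 else r10
        dwell = -math.log(1.0 - random.random()) / rate   # exponential dwell
        dwell = min(dwell, t_max - t)                     # clip final event
        if state == 1:
            t_in_1 += dwell
        t += dwell
        state = 1 - state
    return t_in_1 / t_max

random.seed(3)
occ = two_level_occupancy(2.0, 1.0)   # stationary occupancy: r01/(r01+r10) = 2/3
```

Because time advances by the exact event statistics rather than fixed sweeps, the trajectory is faithful at all time scales, which is the property the paper's decomposition into two-level subsystems generalizes to many-body dynamics.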
Field-programmable analogue arrays for the sensorless control of DC motors
NASA Astrophysics Data System (ADS)
Rivera, J.; Dueñas, I.; Ortega, S.; Del Valle, J. L.
2018-02-01
This work presents the analogue implementation of a sensorless controller for direct current (DC) motors based on the super-twisting (ST) sliding mode technique, by means of field-programmable analogue arrays (FPAA). The novelty of this work is twofold: first, the use of the ST algorithm in a sensorless scheme for DC motors; second, the method for implementing this type of sliding mode controller on FPAAs. The ST algorithm reduces the chattering problem produced by the deliberate use of the sign function in classical sliding mode approaches. The advantages of this implementation method over a digital one are that the controller is not digitally approximated, the controller gains do not require fine tuning, and the implementation does not need analogue-to-digital and digital-to-analogue converter circuits. In addition, the FPAA is a reconfigurable technology with lower cost and power consumption. Simulation and experimental results were recorded, in which a more accurate transient response and lower power consumption were obtained with the proposed implementation method compared to a digital implementation. A more accurate performance of the DC motor is also obtained with the proposed sensorless ST technique compared with a classical sliding mode approach.
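The super-twisting law itself is compact. A minimal sketch on a toy sliding variable with a bounded disturbance (the gains, disturbance, and plant are illustrative assumptions; the paper's plant is a DC motor):

```python
import math

def simulate_super_twisting(k1=4.0, k2=2.0, dt=1e-4, t_max=10.0):
    """Super-twisting control of a toy sliding variable:
        s_dot = u + d(t),  d(t) = 0.5*sin(2t)  (so |d_dot| <= 1)
        u = -k1*sqrt(|s|)*sign(s) + v,   v_dot = -k2*sign(s)
    The sqrt term replaces the hard switching of classical sliding mode,
    which suppresses chattering while keeping finite-time convergence."""
    s, v, t, tail = 1.0, 0.0, 0.0, 0.0
    while t < t_max:
        sgn = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sgn + v
        s += dt * (u + 0.5 * math.sin(2.0 * t))   # plant + disturbance
        v += dt * (-k2 * sgn)                     # integral (twisting) term
        t += dt
        if t > 0.8 * t_max:                       # residual after convergence
            tail = max(tail, abs(s))
    return tail

residual = simulate_super_twisting()
```

The sliding variable converges to a small neighborhood of zero set by the discretization, despite the persistent disturbance, which illustrates the robustness the controller brings to the sensorless scheme.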
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods in low-bitrate transmission scenarios.
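The zero-reconstruction-error property of a full-rank linear encoder/decoder pair holds for any orthonormal basis; the tiny Haar basis below is an illustrative stand-in for a trained DLA (patch size, basis, and truncation are assumptions for the sketch):

```python
import math

# Orthonormal Haar basis for length-4 "patches": a full-rank linear
# encoder/decoder, so full-length reconstruction is exact -- the
# property that distinguishes a linear autoencoder from nonlinear reducers.
S2 = math.sqrt(2.0)
BASIS = [[0.5, 0.5, 0.5, 0.5],
         [0.5, 0.5, -0.5, -0.5],
         [1 / S2, -1 / S2, 0.0, 0.0],
         [0.0, 0.0, 1 / S2, -1 / S2]]

def encode(x, k=4):
    """Project onto the first k basis vectors (the 1-D code)."""
    return [sum(b * xi for b, xi in zip(BASIS[j], x)) for j in range(k)]

def decode(code):
    """Linear reconstruction from however many coefficients were kept."""
    out = [0.0] * 4
    for c, row in zip(code, BASIS):
        for i in range(4):
            out[i] += c * row[i]
    return out

patch = [3.0, 1.0, 4.0, 1.5]
err_full = max(abs(a - b) for a, b in zip(decode(encode(patch)), patch))
err_trunc = max(abs(a - b) for a, b in zip(decode(encode(patch, k=2)), patch))
```

Keeping all coefficients reconstructs the patch exactly (to rounding), while truncating the code trades error for compression, which is the lever the 1-D coding framework exploits.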
General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles
NASA Astrophysics Data System (ADS)
Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J.
2017-09-01
The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.
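The classical test-bed dynamics is easy to reproduce numerically. A minimal integration of the (undriven) Van der Pol oscillator showing convergence to a limit cycle of amplitude near 2 (parameters and the semi-implicit Euler scheme are illustrative; the paper studies the driven oscillator with quantum fluctuations):

```python
def van_der_pol_amplitude(mu=0.5, dt=1e-3, t_max=100.0):
    """Semi-implicit Euler integration of x'' = mu*(1 - x*x)*x' - x.
    Returns the peak |x| over the last quarter of the run, i.e. the
    limit-cycle amplitude (close to 2 for moderate mu)."""
    x, v, t, peak = 0.1, 0.0, 0.0, 0.0
    while t < t_max:
        v += dt * (mu * (1.0 - x * x) * v - x)   # update velocity first
        x += dt * v                               # then position
        t += dt
        if t > 0.75 * t_max:                      # measure after transients
            peak = max(peak, abs(x))
    return peak

amp = van_der_pol_amplitude()
```

Any initial condition (other than the origin) spirals onto the same periodic orbit; it is around such a time-dependent classical solution, rather than a fixed point, that the paper's linearization scheme expands the quantum fluctuations.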
Simulations of molecular diffusion in lattices of cells: insights for NMR of red blood cells.
Regan, David G; Kuchel, Philip W
2002-01-01
The pulsed field-gradient spin-echo (PGSE) nuclear magnetic resonance (NMR) experiment, conducted on a suspension of red blood cells (RBC) in a strong magnetic field, yields a q-space plot consisting of a series of maxima and minima. This is mathematically analogous to a classical optical diffraction pattern. The method provides a noninvasive and novel means of characterizing cell suspensions that is sensitive to changes in cell shape and packing density. The positions of the features in a q-space plot characterize the rate of exchange across the membrane, the cell dimensions, and the packing density. A diffusion tensor, containing information regarding the diffusion anisotropy of the system, can also be derived from the PGSE NMR data. In this study, we carried out Monte Carlo simulations of diffusion in suspensions of "virtual" cells that had either biconcave disc (as in RBC) or oblate spheroid geometry. The simulations were performed in a PGSE NMR context, thus enabling predictions of q-space and diffusion tensor data. The simulated data were compared with those from real PGSE NMR diffusion experiments on RBC suspensions over a range of hematocrit values. Methods that facilitate the processing of q-space data were also developed. PMID:12080109
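A one-dimensional toy version of such a Monte Carlo PGSE simulation already shows the diffraction-like minima (reflecting walls stand in for the cell membrane; the narrow-pulse limit and no membrane exchange are assumed, unlike the full 3-D cell-lattice model of the study):

```python
import math
import random

def pgse_signal(q, L=1.0, n_spins=2000, n_steps=400, step=0.05):
    """Narrow-pulse PGSE signal E(q) = |<exp(i*2*pi*q*(x_end - x_start))>|
    for spins random-walking between reflecting walls at x=0 and x=L."""
    re_sum = im_sum = 0.0
    for _ in range(n_spins):
        x0 = random.random() * L     # spins start uniformly in the "cell"
        x = x0
        for _ in range(n_steps):
            x += random.choice((-step, step))
            if x < 0.0:              # reflect at the walls
                x = -x
            elif x > L:
                x = 2.0 * L - x
        phase = 2.0 * math.pi * q * (x - x0)
        re_sum += math.cos(phase)
        im_sum += math.sin(phase)
    return math.hypot(re_sum, im_sum) / n_spins

random.seed(4)
e_low = pgse_signal(0.4)   # below the first minimum; long-time theory ~0.57
e_min = pgse_signal(1.0)   # first diffraction-like minimum expected at q = 1/L
```

In the long-diffusion-time limit the signal approaches |sinc(πqL)|², so the first minimum sits at q = 1/L: the position of such features is what encodes compartment size in the real experiment.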
Direct numerical simulation of cellular-scale blood flow in microvascular networks
NASA Astrophysics Data System (ADS)
Balogh, Peter; Bagchi, Prosenjit
2017-11-01
A direct numerical simulation method is developed to study cellular-scale blood flow in physiologically realistic microvascular networks that are constructed in silico following published in vivo images and data, and are comprised of bifurcating, merging, and winding vessels. The model resolves large deformation of individual red blood cells (RBC) flowing in such complex networks. The vascular walls and deformable interfaces of the RBCs are modeled using the immersed-boundary methods. Time-averaged hemodynamic quantities obtained from the simulations agree quite well with published in vivo data. Our simulations reveal that in several vessels the flow rates and pressure drops could be negatively correlated. The flow resistance and hematocrit are also found to be negatively correlated in some vessels. These observations suggest a deviation from the classical Poiseuille's law in such vessels. The cells are observed to frequently jam at vascular bifurcations resulting in reductions in hematocrit and flow rate in the daughter and mother vessels. We find that RBC jamming results in several orders of magnitude increase in hemodynamic resistance, and thus provides an additional mechanism of increased in vivo blood viscosity as compared to that determined in vitro. Funded by NSF CBET 1604308.
Fluids density functional theory and initializing molecular dynamics simulations of block copolymers
NASA Astrophysics Data System (ADS)
Brown, Jonathan R.; Seo, Youngmi; Maula, Tiara Ann D.; Hall, Lisa M.
2016-03-01
Classical fluids density functional theory (fDFT), which can predict the equilibrium density profiles of polymeric systems, and coarse-grained molecular dynamics (MD) simulations, which are often used to show both the structure and the dynamics of soft materials, can be implemented using very similar bead-based polymer models. We aim to use fDFT and MD in tandem to examine the same system from these two points of view and take advantage of the different features of each methodology. Additionally, the density profiles resulting from fDFT calculations can be used to initialize the MD simulations in a close-to-equilibrated structure, speeding up the simulations. Here, we show how this method can be applied to study microphase-separated states of both typical diblock and tapered diblock copolymers, in which a region with a gradient in composition is placed between the pure blocks. Both methods, applied at constant pressure, predict a decrease in total density as the segregation strength or the length of the tapered region is increased. The predictions for the density profiles from fDFT and MD are similar across materials with a wide range of interfacial widths.
Robust, Practical Adaptive Control for Launch Vehicles
NASA Technical Reports Server (NTRS)
Orr, Jeb. S.; VanZwieten, Tannen S.
2012-01-01
A modern mechanization of a classical adaptive control concept is presented with an application to launch vehicle attitude control systems. Due to a rigorous flight certification environment, many adaptive control concepts are infeasible when applied to high-risk aerospace systems; methods of stability analysis are either intractable for high-complexity models or cannot be reconciled in light of classical requirements. Furthermore, many adaptive techniques appearing in the literature are not suitable for application to conditionally stable systems with complex flexible-body dynamics, as is often the case with launch vehicles. The present technique is a multiplicative forward loop gain adaptive law similar to that used for the NASA X-15 flight research vehicle. In digital implementation with several novel features, it is well-suited to application on aerodynamically unstable launch vehicles with thrust vector control via augmentation of the baseline attitude/attitude-rate feedback control scheme. The approach is compatible with standard design features of autopilots for launch vehicles, including phase stabilization of lateral bending and slosh via linear filters. In addition, the method of assessing flight control stability via classical gain and phase margins is not affected under reasonable assumptions. The algorithm's ability to recover from certain unstable operating regimes can in fact be understood in terms of frequency-domain criteria. Finally, simulation results are presented that confirm the ability of the algorithm to improve performance and robustness in realistic failure scenarios.
Quantum image median filtering in the spatial domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Liu, Xiande; Xiao, Hong
2018-03-01
Spatial filtering is a principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent representative of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated the spatial filtering of quantum images, focusing on the design of the quantum median filter and its applications in image de-noising. To this end, we first present the quantum circuits for three basic modules (i.e., Cycle Shift, Comparator, and Swap), and then design two composite modules (i.e., Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial in n, so that the classical method can be sped up.
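For reference, the classical counterpart that the quantum circuit reproduces is the standard windowed median filter. A minimal NumPy sketch (window size assumed 3×3, edge replication assumed) shows why it suppresses impulse noise: isolated outliers never reach the middle of the sorted window.

```python
import numpy as np

def median_filter_3x3(img):
    """Classical 3x3 median filter with edge replication -- the classical
    counterpart of the quantum scheme (window size is our assumption)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Median of the 9 pixels in the window centered at (i, j).
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# Salt-and-pepper noise on a constant patch is removed entirely: a single
# outlier among 9 window values cannot be the median.
img = np.full((8, 8), 128, dtype=np.uint8)
img[2, 3], img[5, 5] = 255, 0        # isolated "salt" and "pepper" pixels
clean = median_filter_3x3(img)
```

This nonlinearity (sorting) is exactly why the operation has no frequency-domain equivalent, which motivates the spatial-domain quantum construction in the abstract.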
Spin polarization of two-dimensional electron system in parabolic potential
NASA Astrophysics Data System (ADS)
Miyake, Takashi; Totsuji, Chieko; Nakanishi, Kenta; Tsuruta, Kenji; Totsuji, Hiroo
2008-09-01
We analyze the ground state of a two-dimensional quantum system of electrons confined in a parabolic potential, with a system size of around 100 electrons at 0 K. We map the system onto a classical system on the basis of the classical-map hypernetted-chain (CHNC) method, which has been proven to work in integral-equation-based analyses of uniform systems, and apply classical Monte Carlo and molecular dynamics simulations. We find that, when we decrease the strength of confinement while keeping the number of confined electrons fixed, the energy of the spin-polarized state with somewhat lower average density becomes smaller than that of the spin-unpolarized state with somewhat higher average density. This system thus undergoes a transition from the spin-unpolarized state to the spin-polarized state, and the corresponding critical value of r estimated from the average density is as low as r ∼ 0.4, which is much smaller than the r value for Wigner lattice formation. When we compare the energies of the spin-unpolarized and spin-polarized states at a given average density, our data give a critical r value for the transition between unpolarized and polarized states of around 10, which is close to, but still smaller than, the known possibility of polarization at r ∼ 27. The advantage of our method is its direct applicability to geometrically complex systems that are difficult to analyze by integral equations; the present system is one such example.
2D Quantum Transport Modeling in Nanoscale MOSFETs
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Anantram, M. P.; Govindan, T. R.; Biegel, Bryan
2001-01-01
With the onset of quantum confinement in the inversion layer in nanoscale MOSFETs, behavior of the resonant level inevitably determines all device characteristics. While most classical device simulators take quantization into account in some simplified manner, the important details of electrostatics are missing. Our work addresses this shortcoming and provides: (a) a framework to quantitatively explore device physics issues such as the source-drain and gate leakage currents, DIBL, and the threshold voltage shift due to quantization, and (b) a means of benchmarking quantum corrections to semiclassical models (such as density-gradient and quantum-corrected MEDICI). We have developed physical approximations and computer code capable of realistically simulating 2-D nanoscale transistors, using the non-equilibrium Green's function (NEGF) method. This is the most accurate full quantum model yet applied to 2-D device simulation. Open boundary conditions, oxide tunneling and phase-breaking scattering are treated on equal footing. Electrons in the ellipsoids of the conduction band are treated within the anisotropic effective mass approximation. Quantum simulations are focused on MIT 25, 50 and 90 nm "well-tempered" MOSFETs and compared to classical and quantum-corrected models. The important feature of the quantum model is the smaller slope of the Id-Vg curve and, consequently, a higher threshold voltage. These results are quantitatively consistent with 1D Schroedinger-Poisson calculations. The effect of gate length on gate-oxide leakage and sub-threshold current has been studied. The shorter gate length device has an order of magnitude smaller current at zero gate bias than the longer gate length device, without a significant trade-off in on-current. This should be a device design consideration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clarke, A. J.; Tourret, D.; Song, Y.; ...
2017-05-01
We study microstructure selection during directional solidification of a thin metallic sample. We combine in situ X-ray radiography of dilute Al-Cu alloy solidification experiments with three-dimensional phase-field simulations. Here we explore a range of temperature gradient G and growth velocity V and build a microstructure selection map for this alloy. We investigate the selection of the primary dendritic spacing Λ and tip radius ρ. While ρ shows good agreement between experimental measurements and dendrite growth theory, with ρ ~ V^(-1/2), Λ is observed to increase with V (∂Λ/∂V > 0), in apparent disagreement with classical scaling laws for primary dendritic spacing, which predict that ∂Λ/∂V < 0. We show through simulations that this trend inversion for Λ(V) is due to liquid convection in our experiments, despite the thin sample configuration. We use a classical diffusion boundary-layer approximation to semi-quantitatively incorporate the effect of liquid convection into phase-field simulations. This approximation is implemented by assuming complete solute mixing outside a purely diffusive zone of constant thickness that surrounds the solid-liquid interface. This simple method enables us to quantitatively match experimental measurements of the planar morphological instability threshold and primary spacings over an order of magnitude in V. Lastly, we explain the observed inversion of ∂Λ/∂V by a combination of slow transient dynamics of microstructural homogenization and the influence of the sample thickness.
Hamel, J F; Sebille, V; Le Neel, T; Kubis, G; Boyer, F C; Hardouin, J B
2017-12-01
Subjective health measurements using Patient-Reported Outcomes (PRO) are increasingly used in randomized trials, particularly for comparisons between patient groups. Two main types of analytical strategies can be used for such data: Classical Test Theory (CTT) and Item Response Theory (IRT) models. These two strategies display very similar characteristics when data are complete, but in the common case when data are missing, whether IRT or CTT is the more appropriate remains unknown and was investigated using simulations. We simulated PRO data such as quality-of-life data. Missing responses to items were simulated as being completely random, depending on an observable covariate, or depending on an unobserved latent trait. The considered CTT-based methods compared scores using complete-case analysis, personal mean imputation, or multiple imputation based on a two-way procedure. The IRT-based method was the Wald test on a Rasch model including a group covariate. The IRT-based method and the multiple-imputation-based CTT method displayed the highest observed power and were the only unbiased methods regardless of the kind of missing data. Online software and Stata® modules compatible with the innate mi impute suite are provided for performing such analyses. Traditional procedures (listwise deletion and personal mean imputation) should be avoided, owing to inevitable problems of bias and lack of power.
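The personal-mean imputation the abstract warns against is easy to state concretely. The sketch below simulates Likert-type PRO items driven by a latent trait, deletes 20% of responses completely at random, and imputes each subject's missing items with that subject's observed mean; item count, scale, and missingness rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_items = 200, 10

# Simulated Likert-type item responses (0-4) driven by a latent trait
# (illustrative generating model, not the paper's simulation design).
theta = rng.normal(0.0, 1.0, n_subj)
raw = 2.0 + theta[:, None] + rng.normal(0.0, 1.0, (n_subj, n_items))
items = np.clip(np.round(raw), 0, 4)

# Missing completely at random: delete 20% of the responses.
mask = rng.random((n_subj, n_items)) < 0.2
obs = np.where(mask, np.nan, items)

# Personal-mean imputation: replace a subject's missing items by the mean
# of that same subject's observed items, then sum to a score.
person_mean = np.nanmean(obs, axis=1, keepdims=True)
imputed = np.where(np.isnan(obs), person_mean, obs)
score = imputed.sum(axis=1)
```

Under MCAR this looks harmless (the imputed score tracks the complete-data score closely), which is precisely why the bias it introduces under covariate- or trait-dependent missingness is easy to miss; the paper's simulations probe exactly those harder cases.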
NASA Astrophysics Data System (ADS)
Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.
2012-01-01
Streamer dynamics is characterized by the fast propagation of ionized shock waves on the nanosecond scale under very sharp space-charge variations. Modelling streamer dynamics requires solving the charged-particle transport equations coupled to the elliptic Poisson's equation. The latter has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by direct comparison with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated from comparisons with the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained among the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. However, the MG method solves Poisson's equation at least two times faster under our simulation conditions.
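The SOR baseline referred to above can be sketched in a few lines: red-black-free Gauss-Seidel sweeps with over-relaxation on a 2-D grid. The grid size, relaxation factor, boundary conditions, and point-charge source below are illustrative assumptions, not the paper's asymmetric-electrode setup.

```python
import numpy as np

def sor_poisson(rho, h, omega=1.8, tol=1e-6, max_iter=20000):
    """Solve the 2-D Poisson equation  laplacian(phi) = -rho  on a square
    grid with phi = 0 Dirichlet boundaries by successive over-relaxation
    (the classical baseline the multigrid methods are compared against)."""
    phi = np.zeros_like(rho)
    n = rho.shape[0]
    for _ in range(max_iter):
        max_delta = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = (1.0 - omega) * phi[i, j] + omega * 0.25 * (
                    phi[i + 1, j] + phi[i - 1, j]
                    + phi[i, j + 1] + phi[i, j - 1]
                    + h * h * rho[i, j]
                )
                max_delta = max(max_delta, abs(new - phi[i, j]))
                phi[i, j] = new        # in-place update (Gauss-Seidel style)
        if max_delta < tol:            # stop once updates stall
            break
    return phi

# Point charge in the middle of a small grid (illustrative source term).
n, h = 33, 1.0 / 32
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2
phi = sor_poisson(rho, h)
```

The reason multigrid wins, as in the abstract, is that SOR damps only short-wavelength error quickly; long-wavelength components need O(n) sweeps, which multigrid removes by relaxing on coarser grids.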
Probability Simulations by Non-Lipschitz Chaos
NASA Technical Reports Server (NTRS)
Zak, Michail
1996-01-01
It has been demonstrated that classical probabilities, and in particular a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices. Self-organizing properties of systems coupling simulated and calculated probabilities, and their link to quantum computations, are discussed.
Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe
2006-08-17
Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.
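The convergence gain from a cumulant expansion can be illustrated on a toy average. For Gaussian-distributed x, the second-order expansion log⟨e^(−x)⟩ ≈ −⟨x⟩ + ½ Var(x) is exact, and estimating the two low-order cumulants from samples converges far faster than the direct sample mean of e^(−x), which is dominated by rare samples. The Gaussian model and parameters are our illustration, not the paper's electron-in-molten-salt calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: estimate A = <exp(-x)> for x ~ N(mu, sigma^2).
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, 5000)

# Direct estimator: noisy, because rare small-x samples carry huge weight.
direct = np.mean(np.exp(-x))

# Second-order cumulant estimator: log A ~= -<x> + (1/2) Var(x),
# exact for Gaussian statistics.
cumulant = np.exp(-np.mean(x) + 0.5 * np.var(x))

# Closed-form value for comparison.
exact = np.exp(-mu + 0.5 * sigma**2)
```

The same mechanism is what the cumulant reformulation of the linearized averages exploits: moving the exponential's fluctuations into low-order cumulants tames the statistical noise of the estimator.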
NASA Astrophysics Data System (ADS)
Sagui, Celeste; Pedersen, Lee G.; Darden, Thomas A.
2004-01-01
The accurate simulation of biologically active macromolecules faces serious limitations that originate in the treatment of electrostatics in the empirical force fields. The current use of "partial charges" is a significant source of errors, since these vary widely with different conformations. By contrast, the molecular electrostatic potential (MEP) obtained through the use of a distributed multipole moment description, has been shown to converge to the quantum MEP outside the van der Waals surface, when higher order multipoles are used. However, in spite of the considerable improvement to the representation of the electronic cloud, higher order multipoles are not part of current classical biomolecular force fields due to the excessive computational cost. In this paper we present an efficient formalism for the treatment of higher order multipoles in Cartesian tensor formalism. The Ewald "direct sum" is evaluated through a McMurchie-Davidson formalism [L. McMurchie and E. Davidson, J. Comput. Phys. 26, 218 (1978)]. The "reciprocal sum" has been implemented in three different ways: using an Ewald scheme, a particle mesh Ewald (PME) method, and a multigrid-based approach. We find that even though the use of the McMurchie-Davidson formalism considerably reduces the cost of the calculation with respect to the standard matrix implementation of multipole interactions, the calculation in direct space remains expensive. When most of the calculation is moved to reciprocal space via the PME method, the cost of a calculation where all multipolar interactions (up to hexadecapole-hexadecapole) are included is only about 8.5 times more expensive than a regular AMBER 7 [D. A. Pearlman et al., Comput. Phys. Commun. 91, 1 (1995)] implementation with only charge-charge interactions. The multigrid implementation is slower but shows very promising results for parallelization. It provides a natural way to interface with continuous, Gaussian-based electrostatics in the future. It is hoped that this new formalism will facilitate the systematic implementation of higher order multipoles in classical biomolecular force fields.
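The Ewald "direct sum" the paper evaluates can be shown in its simplest, charge-charge form: each pair interacts through an erfc-screened Coulomb term over minimum images, with everything long-ranged pushed to the reciprocal sum. The sketch below is this charge-charge term only, with illustrative parameters; the paper's multipolar terms, McMurchie-Davidson machinery, and reciprocal-space sum are omitted.

```python
import math

def ewald_direct_sum(positions, charges, box, beta, r_cut):
    """Real-space (direct) part of a charge-charge Ewald sum in a cubic box:
    sum over minimum-image pairs of  q_i q_j erfc(beta * r) / r.
    Charge-charge only; higher multipoles are not included in this sketch."""
    energy = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            # Minimum-image separation in a cubic box of side `box`.
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            d = [dk - box * round(dk / box) for dk in d]
            r = math.sqrt(sum(dk * dk for dk in d))
            if r < r_cut:
                # erfc screening makes the term short-ranged, so a cutoff
                # is safe; the remainder lives in the reciprocal sum.
                energy += charges[i] * charges[j] * math.erfc(beta * r) / r
    return energy

# Two opposite unit charges 1 length-unit apart in a 20-unit box
# (illustrative values, Gaussian-units sketch).
pos = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
q = [1.0, -1.0]
e_dir = ewald_direct_sum(pos, q, box=20.0, beta=0.35, r_cut=9.0)
```

The splitting parameter beta trades work between this loop and reciprocal space, which is exactly the lever the paper pulls when it moves most of the multipolar cost into the PME reciprocal sum.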
Theoretical analysis of evaporative cooling of classic heat stroke patients.
Alzeer, Abdulaziz H; Wissler, E H
2018-05-18
Heat stroke is a serious global health concern associated with high mortality, and newer treatments must be designed to improve outcomes. The aim of this study is to evaluate the effect of variations in ambient temperature and wind speed on the rate of cooling in a simulated heat stroke subject using the dynamic model of Wissler. We assume that a 60-year-old, 70-kg female suffers classic heat stroke after walking fully exposed to the sun for 4 h while the ambient temperature is 40 °C, the relative humidity is 20%, and the wind speed is 2.5 m/s. Her esophageal and skin temperatures are 41.9 and 40.7 °C at the time of collapse. Cooling is accomplished by misting with lukewarm water while exposed to forced airflow at a temperature of 20 to 40 °C and a velocity of 0.5 or 1 m/s. Skin blood flow is assumed to be either normal, one-half of normal, or twice normal. At a wind speed of 0.5 m/s and normal skin blood flow, decreasing the air temperature from 40 to 20 °C increased cooling and reduced the time required to reach a desired temperature of 38 °C. This relationship was also maintained in reduced-blood-flow states. Increasing the wind speed to 1 m/s increased cooling and reduced the time to reach the optimal temperature in both normal and reduced skin-blood-flow states. In conclusion, evaporative cooling provides an effective method for cooling classic heat stroke patients. The maximum heat dissipation from the simulated Wissler model was recorded when the entire body was misted with lukewarm water and forced air at 1 m/s and 20 °C was applied.
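The qualitative trend, cooler and faster air shortens the time to reach 38 °C, can be illustrated with a lumped single-node energy balance. This is emphatically not the multi-element Wissler model: the heat capacity, surface area, convective coefficient, and evaporative power below are rough textbook-style assumptions chosen only to show the structure of the calculation.

```python
# Lumped single-node cooling sketch (NOT the Wissler model); all
# coefficients are illustrative assumptions.
mass, c_p = 70.0, 3500.0     # body mass (kg) and specific heat (J/(kg K))
area = 1.8                   # body surface area (m^2)
t_core, t_air = 41.9, 20.0   # starting core and air temperature (deg C)
h_c = 12.0                   # convective coefficient at ~1 m/s wind (assumed)
q_evap = 400.0               # evaporative loss from the misted film (W, assumed)

dt, t = 1.0, 0.0             # 1-s time step
while t_core > 38.0:         # cool to the 38 deg C target
    # Heat loss = convection to the airstream + evaporation of the mist.
    q_loss = h_c * area * (t_core - t_air) + q_evap
    t_core -= q_loss * dt / (mass * c_p)
    t += dt
minutes_to_target = t / 60.0
```

Raising the wind speed enters through a larger h_c and a larger q_evap (faster film evaporation), and lowering t_air widens the convective driving gradient; both shrink `minutes_to_target`, matching the direction of the paper's findings.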