Sample records for path integral monte

  1. Path integral Monte Carlo ground state approach: formalism, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2017-11-01

    Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
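    As a reminder of the central idea behind PIGS (a standard relation, not a result specific to this review), a trial state is projected onto the ground state by long imaginary-time propagation, and ground-state expectation values are taken at the center of the path:

        |\psi_0\rangle \propto \lim_{\tau \to \infty} e^{-\tau \hat H} |\psi_T\rangle,
        \qquad
        \langle \hat O \rangle = \lim_{\tau \to \infty}
        \frac{\langle \psi_T | e^{-\tau \hat H} \, \hat O \, e^{-\tau \hat H} | \psi_T \rangle}
             {\langle \psi_T | e^{-2\tau \hat H} | \psi_T \rangle},

    with each propagator factorized into many short imaginary-time slices that are sampled stochastically.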

  2. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
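    To make the role of the sampling distribution concrete, here is a minimal importance-sampling sketch in which a Gaussian mixture is used to estimate a one-dimensional integral; the integrand, mixture parameters, and sample sizes are hypothetical illustrations and are not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_mixture(weights, means, sigmas):
        """Return a sampler and pdf for a 1D Gaussian mixture (hypothetical parameters)."""
        weights, means, sigmas = map(np.asarray, (weights, means, sigmas))

        def sample(n):
            comp = rng.choice(len(weights), size=n, p=weights)
            return rng.normal(means[comp], sigmas[comp])

        def pdf(x):
            x = np.asarray(x)[:, None]
            g = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
            return g @ weights

        return sample, pdf

    def f(x):
        # Stand-in integrand; think of it as the factor averaged by the Monte Carlo estimator.
        return np.exp(-0.5 * x ** 4)

    def estimate(sample, pdf, n=200_000):
        x = sample(n)
        w = f(x) / pdf(x)
        return w.mean(), w.std(ddof=1) / np.sqrt(n)   # estimate and its stochastic error

    # Two hypothetical sampling distributions: one well matched to f, one poorly matched.
    good = make_mixture([0.5, 0.5], [-0.4, 0.4], [0.6, 0.6])
    poor = make_mixture([0.5, 0.5], [-1.5, 1.5], [0.7, 0.7])

    for label, (sample, pdf) in [("well matched", good), ("poorly matched", poor)]:
        val, err = estimate(sample, pdf)
        print(f"{label}: integral ≈ {val:.4f} ± {err:.4f}")
    ```

    The poorly matched mixture yields a visibly larger stochastic error for the same number of samples, which is the effect the paper exploits by tailoring the sampling distribution.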

  3. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

    Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.
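    For orientation, the quantity sampled in finite-temperature PIMC follows from the standard convolution property of the thermal density matrix (a textbook identity, not a result of this thesis):

        \rho(R_0, R_M; \beta) = \int dR_1 \cdots dR_{M-1} \; \prod_{k=0}^{M-1} \rho(R_k, R_{k+1}; \tau),
        \qquad \tau = \beta / M,

    so that thermal expectation values become averages over discretized imaginary-time paths weighted by the product of short-time density matrices, each of which can be approximated accurately because \tau is small.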

  4. High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2015-03-01

    In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.

  5. Path integral pricing of Wasabi option in the Black-Scholes model

    NASA Astrophysics Data System (ADS)

    Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada

    2014-11-01

    In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation time derivatives. Using this result we derive a fair price for the case of the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path dependent derivative product is investigated. We derive an approximation for our so-called Wasabi option fair price and check the accuracy of our result with a Monte Carlo simulation.

  6. Data assimilation using a GPU accelerated path integral Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Quinn, John C.; Abarbanel, Henry D. I.

    2011-09-01

    The answers to data assimilation questions can be expressed as path integrals over all possible state and parameter histories. We show how these path integrals can be evaluated numerically using a Markov Chain Monte Carlo method designed to run in parallel on a graphics processing unit (GPU). We demonstrate the application of the method to an example with a transmembrane voltage time series of a simulated neuron as an input, and using a Hodgkin-Huxley neuron model. By taking advantage of GPU computing, we gain a parallel speedup factor of up to about 300, compared to an equivalent serial computation on a CPU, with performance increasing as the length of the observation time used for data assimilation increases.

  7. Evaluation of the path integral for flow through random porous media

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; Coche, Gil-Arnaud; King, Peter R.; Vvedensky, Dimitri D.

    2018-04-01

    We present a path integral formulation of Darcy's equation in one dimension with random permeability described by a correlated multivariate lognormal distribution. This path integral is evaluated with the Markov chain Monte Carlo method to obtain pressure distributions, which are shown to agree with the solutions of the corresponding stochastic differential equation for Dirichlet and Neumann boundary conditions. The extension of our approach to flow through random media in two and three dimensions is discussed.

  8. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
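    A minimal sketch of the kind of computation described here: Metropolis sampling of the discretized Euclidean action of a harmonic oscillator, in units with m = ω = ħ = 1. The lattice size, step size, and thermalization settings below are illustrative choices rather than the values used in the guide, and the quoted error bar is naive, i.e., it ignores the autocorrelations the paper discusses.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    N, a = 120, 0.1            # number of time slices and lattice spacing (m = omega = hbar = 1)
    n_sweeps, delta = 5_000, 0.5

    def action_diff(x, i, xnew):
        """Change in the discretized Euclidean action when site i is moved to xnew."""
        xm, xp = x[(i - 1) % N], x[(i + 1) % N]     # periodic boundary conditions

        def S_local(xi):
            return (xi - xm) ** 2 / (2 * a) + (xp - xi) ** 2 / (2 * a) + 0.5 * a * xi ** 2

        return S_local(xnew) - S_local(x[i])

    x = np.zeros(N)
    x2_samples = []
    for sweep in range(n_sweeps):
        for i in range(N):
            xnew = x[i] + rng.uniform(-delta, delta)
            dS = action_diff(x, i, xnew)
            if dS <= 0 or rng.random() < np.exp(-dS):   # Metropolis accept/reject
                x[i] = xnew
        if sweep > 500 and sweep % 10 == 0:             # crude thermalization and thinning
            x2_samples.append(np.mean(x ** 2))

    # Ground-state value is <x^2> = 0.5 in these units, up to discretization and finite-T effects.
    print("<x^2> =", np.mean(x2_samples), "+/-", np.std(x2_samples) / np.sqrt(len(x2_samples)))
    ```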

  9. Elastic constants of hcp 4He: Path-integral Monte Carlo results versus experiment

    NASA Astrophysics Data System (ADS)

    Ardila, Luis Aldemar Peña; Vitiello, Silvio A.; de Koning, Maurice

    2011-09-01

    The elastic constants of hcp 4He are computed using the path-integral Monte Carlo (PIMC) method. The stiffness coefficients are obtained by imposing different distortions to a periodic cell containing 180 atoms, followed by measurement of the elements of the corresponding stress tensor. For this purpose an appropriate path-integral expression for the stress tensor observable is derived and implemented into the pimc++ package. In addition to allowing the determination of the elastic stiffness constants, this development also opens the way to an explicit atomistic determination of the Peierls stress for dislocation motion using the PIMC technique. A comparison of the results to available experimental data shows an overall good agreement of the density dependence of the elastic constants, with the single exception of C13. Additional calculations for the bcc phase, on the other hand, show good agreement for all elastic constants.

  10. Accurate Exchange-Correlation Energies for the Warm Dense Electron Gas.

    PubMed

    Malone, Fionn D; Blunt, N S; Brown, Ethan W; Lee, D K K; Spencer, J S; Foulkes, W M C; Shepherd, James J

    2016-09-09

    The density matrix quantum Monte Carlo (DMQMC) method is used to sample exact-on-average N-body density matrices for uniform electron gas systems of up to 10^124 matrix elements via a stochastic solution of the Bloch equation. The results of these calculations resolve a current debate over the accuracy of the data used to parametrize finite-temperature density functionals. Exchange-correlation energies calculated using the real-space restricted path-integral formalism and the k-space configuration path-integral formalism disagree by up to ∼10% at certain reduced temperatures T/T_F ≤ 0.5 and densities r_s ≤ 1. Our calculations confirm the accuracy of the configuration path-integral Monte Carlo results available at high density and bridge the gap to lower densities, providing trustworthy data in the regime typical of planetary interiors and solids subject to laser irradiation. We demonstrate that the DMQMC method can calculate free energies directly and present exact free energies for T/T_F ≥ 1 and r_s ≤ 2.

  11. Variational path integral molecular dynamics and hybrid Monte Carlo algorithms using a fourth order propagator with applications to molecular systems

    NASA Astrophysics Data System (ADS)

    Kamibayashi, Yuki; Miura, Shinichi

    2016-08-01

    In the present study, variational path integral molecular dynamics and associated hybrid Monte Carlo (HMC) methods have been developed on the basis of a fourth order approximation of a density operator. To reveal the dependence of physical quantities on various parameters, we analytically solve one-dimensional harmonic oscillators by the variational path integral; as a byproduct, we obtain the analytical expression of the discretized density matrix using the fourth order approximation for the oscillators. Then, we apply our methods to realistic systems like a water molecule and a para-hydrogen cluster. In the HMC, we adopt a two-level description to avoid the time consuming Hessian evaluation. For the systems examined in this paper, the HMC method is found to be about three times more efficient than the molecular dynamics method if appropriate HMC parameters are adopted; the advantage of the HMC method is suggested to be more evident for systems described by many-body interactions.

  12. Path-integral Monte Carlo method for Rényi entanglement entropies.

    PubMed

    Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A

    2014-07-01

    We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.

  13. Quantum Gibbs ensemble Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fantoni, Riccardo, E-mail: rfantoni@ts.infn.it; Moroni, Saverio, E-mail: moroni@democritos.it

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions.

  14. On processed splitting methods and high-order actions in path-integral Monte Carlo simulations.

    PubMed

    Casas, Fernando

    2010-10-21

    Processed splitting methods are particularly well adapted to carry out path-integral Monte Carlo (PIMC) simulations: since one is mainly interested in estimating traces of operators, only the kernel of the method is necessary to approximate the thermal density matrix. Unfortunately, they suffer the same drawback as standard, nonprocessed integrators: kernels of effective order greater than two necessarily involve some negative coefficients. This problem can be circumvented, however, by incorporating modified potentials into the composition, thus rendering schemes of higher effective order. In this work we analyze a family of fourth-order schemes recently proposed in the PIMC setting, paying special attention to their linear stability properties, and justify their observed behavior in practice. We also propose a new fourth-order scheme requiring the same computational cost but with an enlarged stability interval.

  15. Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-10

    Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than is accessible in the Kohn-Sham method and lower temperatures than is accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.

  16. Combination of the pair density approximation and the Takahashi–Imada approximation for path integral Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zillich, Robert E., E-mail: robert.zillich@jku.at

    2015-11-15

    We construct an accurate imaginary time propagator for path integral Monte Carlo simulations for heterogeneous systems consisting of a mixture of atoms and molecules. We combine the pair density approximation, which is highly accurate but feasible only for the isotropic interactions between atoms, with the Takahashi–Imada approximation for general interactions. We present finite temperature simulation results for the energy and structure of molecule–helium clusters X(4He)20 (X = HCCH and LiH) which show a marked improvement over the Trotter approximation, which has a 2nd-order time step bias. We show that the 4th-order corrections of the Takahashi–Imada approximation can also be applied perturbatively to a 2nd-order simulation.
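    For reference, the Takahashi–Imada correction mentioned here is commonly quoted as a gradient-squared modification of the potential acting in each time slice (standard form for a particle of mass m and time step \tau; prefactor conventions vary between implementations):

        V_{\mathrm{TI}}(\mathbf r) = V(\mathbf r) + \frac{\hbar^2 \tau^2}{24\, m} \left| \nabla V(\mathbf r) \right|^2,

    which reduces the 2nd-order time-step bias of the primitive (Trotter) factorization to 4th order for traces such as the partition function.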

  17. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
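    A minimal sketch of the scaling step described above, assuming the zero-absorption time-resolved reflectance R0(t) and the per-layer time fractions are already available from the baseline MC run; every array and coefficient below is a hypothetical placeholder rather than data from the paper.

    ```python
    import numpy as np

    # Hypothetical zero-absorption time-resolved reflectance from the baseline MC simulation
    t = np.linspace(0.05, 5.0, 200)          # time bins [ns]
    R0 = np.exp(-t) * t ** 2                 # placeholder curve shape

    # Fraction of the average classical path spent in each layer at each time bin
    # (rows: time bins, columns: layers); a fixed 2-layer split here just for illustration
    frac = np.column_stack([np.full_like(t, 0.7), np.full_like(t, 0.3)])

    c_n = 214.0                              # speed of light in tissue [mm/ns], assuming n ~ 1.4
    mu_a = np.array([0.01, 0.03])            # absorption coefficients per layer [1/mm]

    # Weighted Beer-Lambert factor: total path length c_n*t is split among layers by 'frac',
    # and the zero-absorption curve is rescaled accordingly.
    path_per_layer = c_n * t[:, None] * frac                     # path length in each layer [mm]
    R = R0 * np.exp(-(path_per_layer * mu_a).sum(axis=1))        # scaled reflectance curve
    ```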

  18. Path-integral and Ornstein-Zernike study of quantum fluid structures on the crystallization line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sesé, Luis M., E-mail: msese@ccia.uned.es

    2016-03-07

    Liquid neon, liquid para-hydrogen, and the quantum hard-sphere fluid are studied with path integral Monte Carlo simulations and the Ornstein-Zernike pair equation on their respective crystallization lines. The results cover the whole sets of structures in the r-space and the k-space and, for completeness, the internal energies, pressures and isothermal compressibilities. Comparison with experiment is made wherever possible, and the possibilities of establishing k-space criteria for quantum crystallization based on the path-integral centroids are discussed. In this regard, the results show that the centroid structure factor contains two significant parameters related to its main peak features (amplitude and shape) that can be useful to characterize freezing.

  19. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  20. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  1. Excitonic effects in two-dimensional semiconductors: Path integral Monte Carlo approach

    DOE PAGES

    Velizhanin, Kirill A.; Saxena, Avadh

    2015-11-01

    One of the most striking features of novel two-dimensional semiconductors (e.g., transition metal dichalcogenide monolayers or phosphorene) is the strong Coulomb interaction between charge carriers, resulting in large excitonic effects. In particular, this leads to the formation of multicarrier bound states upon photoexcitation (e.g., excitons, trions, and biexcitons), which could remain stable at near-room temperatures and contribute significantly to the optical properties of such materials. In our work we have used the path integral Monte Carlo methodology to numerically study properties of multicarrier bound states in two-dimensional semiconductors. Specifically, we have accurately investigated and tabulated the dependence of single-exciton, trion, and biexciton binding energies on the strength of dielectric screening, including the limiting cases of very strong and very weak screening. The results of this work are potentially useful in the analysis of experimental data and benchmarking of theoretical and computational models.

  2. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-06-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  3. Path integral approach to closed-form option pricing formulas with applications to stochastic volatility and interest rate models

    NASA Astrophysics Data System (ADS)

    Lemmens, D.; Wouters, M.; Tempere, J.; Foulon, S.

    2008-07-01

    We present a path integral method to derive closed-form solutions for option prices in a stochastic volatility model. The method is explained in detail for the pricing of a plain vanilla option. The flexibility of our approach is demonstrated by extending the realm of closed-form option price formulas to the case where both the volatility and interest rates are stochastic. This flexibility is promising for the treatment of exotic options. Our analytical formulas are tested with numerical Monte Carlo simulations.

  4. Feynman path integral application on deriving Black-Scholes diffusion equation for European option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    The path integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition by a transition probability function known as the propagator. In its early development, several studies applied this method only to problems in quantum mechanics. Nevertheless, the path integral can also be applied to other subjects with some modifications of the propagator function. In this study, we investigate the application of the path integral method to financial derivatives, specifically stock options. The Black-Scholes model (Nobel Prize 1997) was a starting anchor in option pricing studies. Although this model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes model is still a legitimate equation for pricing an option. The derivation of the Black-Scholes equation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the path integral: in Black-Scholes, the share's initial price is transformed to its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we compare the path integral analytical solution with a Monte Carlo numerical solution to assess the agreement between the two methods.
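    A small self-contained comparison in the spirit of the last step: a plain Monte Carlo estimate of a European call price versus the closed-form Black-Scholes value. The parameter values are illustrative only, and this is a generic cross-check rather than the paper's path-integral derivation.

    ```python
    import numpy as np
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S0, K, r, sigma, T):
        """Closed-form Black-Scholes price of a European call."""
        d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def mc_call(S0, K, r, sigma, T, n=1_000_000, seed=0):
        """Monte Carlo price using the exact lognormal terminal distribution."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n)
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * sqrt(T) * z)
        payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
        return payoff.mean(), payoff.std(ddof=1) / sqrt(n)

    S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative parameters
    mc, err = mc_call(S0, K, r, sigma, T)
    print(f"Black-Scholes: {bs_call(S0, K, r, sigma, T):.4f}   Monte Carlo: {mc:.4f} ± {err:.4f}")
    ```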

  5. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When there are singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
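    A one-variable illustration of the idea, using a generic Gaussian toy model rather than the specific toy model of these proceedings: for the complex action S(z) = z²/2 + iλz, sampling on the real axis gives an average phase factor exp(-λ²/2), while shifting the path to z = x - iλ removes the phase entirely.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    lam = 2.0
    n = 500_000

    def S(z):
        """Complex action of a one-variable toy model: S(z) = z^2/2 + i*lam*z."""
        return 0.5 * z ** 2 + 1j * lam * z

    # Path 1: the real axis, z = x. The weight |exp(-S)| is proportional to exp(-x^2/2).
    x = rng.standard_normal(n)
    phase_real_axis = np.exp(-1j * np.imag(S(x)))
    print("average phase factor (real axis):", abs(phase_real_axis.mean()),
          " analytic:", np.exp(-lam ** 2 / 2))

    # Path 2: shifted path z = x - i*lam. The Jacobian is 1 and |exp(-S)| is the same Gaussian
    # in x, so the same samples can be reused; Im S vanishes identically on this path.
    z = x - 1j * lam
    phase_shifted = np.exp(-1j * np.imag(S(z)))
    print("average phase factor (shifted path):", abs(phase_shifted.mean()))
    ```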

  6. Kinetic isotope effect in malonaldehyde determined from path integral Monte Carlo simulations.

    PubMed

    Huang, Jing; Buchowiecki, Marcin; Nagy, Tibor; Vaníček, Jiří; Meuwly, Markus

    2014-01-07

    The primary H/D kinetic isotope effect on the intramolecular proton transfer in malonaldehyde is determined from quantum instanton path integral Monte Carlo simulations on a fully dimensional and validated potential energy surface for temperatures between 250 and 1500 K. Our calculations, based on thermodynamic integration with respect to the mass of the transferring particle, are significantly accelerated by the direct evaluation of the kinetic isotope effect instead of computing it as a ratio of two rate constants. At room temperature, the KIE from the present simulations is 5.2 ± 0.4. The KIE is found to vary considerably as a function of temperature and the low-T behaviour is dominated by the fact that the free energy derivative in the reactant state increases more rapidly than in the transition state. Detailed analysis of the various contributions to the quantum rate constant together with estimates for rates from conventional transition state theory and from periodic orbit theory suggest that the KIE in malonaldehyde is dominated by zero point energy effects and that tunneling plays a minor role at room temperature.

  7. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and thus for whom destructive interference is not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k -SAT problems, use k -local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n -body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  8. MONTE: the next generation of mission design and navigation software

    NASA Astrophysics Data System (ADS)

    Evans, Scott; Taber, William; Drain, Theodore; Smith, Jonathon; Wu, Hsi-Cheng; Guevara, Michelle; Sunseri, Richard; Evans, James

    2018-03-01

    The Mission analysis, Operations and Navigation Toolkit Environment (MONTE) (Sunseri et al. in NASA Tech Briefs 36(9), 2012) is an astrodynamic toolkit produced by the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory. It provides a single integrated environment for all phases of deep space and Earth orbiting missions. Capabilities include: trajectory optimization and analysis, operational orbit determination, flight path control, and 2D/3D visualization. MONTE is presented to the user as an importable Python language module. This allows a simple but powerful user interface via CLUI or script. In addition, the Python interface allows MONTE to be used seamlessly with other canonical scientific programming tools such as SciPy, NumPy, and Matplotlib. MONTE is the prime operational orbit determination software for all JPL navigated missions.

  9. Path integral Monte Carlo determination of the fourth-order virial coefficient for unitary two-component Fermi gas with zero-range interactions

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2016-05-01

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b4 of such a strongly interacting Fermi gas using a customized ab initio path integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b4, our b4 agrees with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly anti-symmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions. We gratefully acknowledge support by the NSF.

  10. Path-Integral Monte Carlo Determination of the Fourth-Order Virial Coefficient for a Unitary Two-Component Fermi Gas with Zero-Range Interactions.

    PubMed

    Yan, Yangqian; Blume, D

    2016-06-10

    The unitary equal-mass Fermi gas with zero-range interactions constitutes a paradigmatic model system that is relevant to atomic, condensed matter, nuclear, particle, and astrophysics. This work determines the fourth-order virial coefficient b_{4} of such a strongly interacting Fermi gas using a customized ab initio path-integral Monte Carlo (PIMC) algorithm. In contrast to earlier theoretical results, which disagreed on the sign and magnitude of b_{4}, our b_{4} agrees within error bars with the experimentally determined value, thereby resolving an ongoing literature debate. Utilizing a trap regulator, our PIMC approach determines the fourth-order virial coefficient by directly sampling the partition function. An on-the-fly antisymmetrization avoids the Thomas collapse and, combined with the use of the exact two-body zero-range propagator, establishes an efficient general means to treat small Fermi systems with zero-range interactions.

  11. Quantum annealing of the traveling-salesman problem.

    PubMed

    Martonák, Roman; Santoro, Giuseppe E; Tosatti, Erio

    2004-11-01

    We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist in restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.
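    For concreteness, the classical ingredients named here (two-opt moves inside a thermal Metropolis loop) can be sketched as follows; the random city coordinates, annealing schedule, and move count are illustrative, and no quantum kinetic term or TSPLIB instance is included.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_cities = 60
    xy = rng.random((n_cities, 2))                      # illustrative random city coordinates

    def tour_length(tour):
        d = xy[tour] - xy[np.roll(tour, -1)]
        return np.sqrt((d ** 2).sum(axis=1)).sum()

    tour = rng.permutation(n_cities)
    length = tour_length(tour)

    T, cooling, n_steps = 1.0, 0.99995, 100_000         # simple geometric cooling schedule
    for step in range(n_steps):
        i, j = sorted(rng.integers(0, n_cities, size=2))
        if j - i < 2:
            continue
        new_tour = tour.copy()
        new_tour[i:j] = new_tour[i:j][::-1]             # two-opt move: exchange two links
        new_length = tour_length(new_tour)              # (full recompute; fine for a sketch)
        if new_length < length or rng.random() < np.exp(-(new_length - length) / T):
            tour, length = new_tour, new_length
        T *= cooling

    print("final tour length:", length)
    ```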

  12. NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems

    NASA Astrophysics Data System (ADS)

    Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek

    2015-03-01

    The most accurate technique to model the X- and gamma radiation path through a numerically defined object is the Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are ready for massive parallel implementation, e.g. on Graphics Processing Units (GPU), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray-tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or NURBS-based objects, and recording any required quantities, like path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as the ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.

  13. Elucidating the electron transport in semiconductors via Monte Carlo simulations: an inquiry-driven learning path for engineering undergraduates

    NASA Astrophysics Data System (ADS)

    Persano Adorno, Dominique; Pizzolato, Nicola; Fazio, Claudio

    2015-09-01

    Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations.

  14. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method

    NASA Astrophysics Data System (ADS)

    Miura, Shinichi

    2018-03-01

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  15. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method.

    PubMed

    Miura, Shinichi

    2018-03-14

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  16. Cubature on Wiener Space: Pathwise Convergence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayer, Christian, E-mail: christian.bayer@wias-berlin.de; Friz, Peter K., E-mail: friz@math.tu-berlin.de

    2013-04-15

    Cubature on Wiener space (Lyons and Victoir in Proc. R. Soc. Lond. A 460(2041):169-198, 2004) provides a powerful alternative to Monte Carlo simulation for the integration of certain functionals on Wiener space. More specifically, and in the language of mathematical finance, cubature allows for fast computation of European option prices in generic diffusion models. We give a random walk interpretation of cubature and similar (e.g. the Ninomiya-Victoir) weak approximation schemes. By using rough path analysis, we are able to establish weak convergence for general path-dependent option prices.

  17. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^{-1/2}) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
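    To illustrate the basic Monte Carlo versus quasi-Monte Carlo contrast in the simplest possible setting (plain quadrature rather than τ-leaping), the sketch below compares pseudorandom points with scrambled Sobol points on a smooth test integrand; it assumes SciPy's scipy.stats.qmc module is available, and the integrand and dimension are arbitrary choices.

    ```python
    import numpy as np
    from scipy.stats import qmc

    d, n, n_rep = 4, 2 ** 12, 20
    rng = np.random.default_rng(4)

    def f(u):
        # Smooth test integrand on [0,1]^d; its exact integral is 1.
        return np.prod(1.0 + (u - 0.5), axis=1)

    mc_err, qmc_err = [], []
    for rep in range(n_rep):
        u_mc = rng.random((n, d))                                          # pseudorandom points
        sobol = qmc.Sobol(d, scramble=True, seed=int(rng.integers(1 << 31)))
        u_qmc = sobol.random(n)                                            # scrambled Sobol points
        mc_err.append(abs(f(u_mc).mean() - 1.0))
        qmc_err.append(abs(f(u_qmc).mean() - 1.0))

    print("mean |error|, plain MC       :", np.mean(mc_err))
    print("mean |error|, scrambled Sobol:", np.mean(qmc_err))
    ```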

  18. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    Here, we present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. Lastly, while these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.

  19. Worldline approach for numerical computation of electromagnetic Casimir energies: Scalar field coupled to magnetodielectric media

    DOE PAGES

    Mackrory, Jonathan B.; Bhattacharya, Tanmoy; Steck, Daniel A.

    2016-10-12

    Here, we present a worldline method for the calculation of Casimir energies for scalar fields coupled to magnetodielectric media. The scalar model we consider may be applied in arbitrary geometries, and it corresponds exactly to one polarization of the electromagnetic field in planar layered media. Starting from the field theory for electromagnetism, we work with the two decoupled polarizations in planar media and develop worldline path integrals, which represent the two polarizations separately, for computing both Casimir and Casimir-Polder potentials. We then show analytically that the path integrals for the transverse-electric polarization coupled to a dielectric medium converge to the proper solutions in certain special cases, including the Casimir-Polder potential of an atom near a planar interface, and the Casimir energy due to two planar interfaces. We also evaluate the path integrals numerically via Monte Carlo path-averaging for these cases, studying the convergence and performance of the resulting computational techniques. Lastly, while these scalar methods are only exact in particular geometries, they may serve as an approximation for Casimir energies for the vector electromagnetic field in other geometries.

  20. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  1. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  2. On static triplet structures in fluids with quantum behavior.

    PubMed

    Sesé, Luis M

    2018-03-14

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  3. On static triplet structures in fluids with quantum behavior

    NASA Astrophysics Data System (ADS)

    Sesé, Luis M.

    2018-03-01

    The problem of the equilibrium triplet structures in fluids with quantum behavior is discussed. Theoretical questions of interest to the real space structures are addressed by studying the three types of structures that can be determined via path integrals (instantaneous, centroid, and total thermalized-continuous linear response). The cases of liquid para-H2 and liquid neon on their crystallization lines are examined with path-integral Monte Carlo simulations, the focus being on the instantaneous and the centroid triplet functions (equilateral and isosceles configurations). To analyze the results further, two standard closures, Kirkwood superposition and Jackson-Feenberg convolution, are utilized. In addition, some pilot calculations with path integrals and closures of the instantaneous triplet structure factor of liquid para-H2 are also carried out for the equilateral components. Triplet structural regularities connected to the pair radial structures are identified, a remarkable usefulness of the closures employed is observed (e.g., triplet spatial functions for medium-long distances, triplet structure factors for medium k wave numbers), and physical insight into the role of pair correlations near quantum crystallization is gained.

  4. Photoabsorption spectra of small HeN+ clusters (N = 3, 4, 10). A quantum Monte Carlo modeling

    NASA Astrophysics Data System (ADS)

    Ćosić, Rajko; Karlický, František; Kalus, René

    2018-05-01

    Photoabsorption cross-sections have been calculated for HeN+ clusters of selected sizes (N = 3, 4, 10) over a broad range of photon energies (Ephot = 2 - 14 eV) and compared with available experimental data. Semiempirical electronic Hamiltonians derived from the diatomics-in-molecules approach have been used for electronic structure calculations and a quantum path-integral Monte Carlo method has been employed to model the delocalization of helium nuclei. While a quantitative agreement has been achieved between the theory and experiment for He3+ and He4+, only qualitative correspondence is seen for He10+.

  5. Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut

    2017-01-01

    We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .

  6. A theoretical framework to predict the most likely ion path in particle imaging.

    PubMed

    Collins-Fekete, Charles-Antoine; Volz, Lennart; Portillo, Stephen K N; Beaulieu, Luc; Seco, Joao

    2017-03-07

    In this work, a generic rigorous Bayesian formalism is introduced to predict the most likely path of any ion crossing a medium between two detection points. The path is predicted based on a combination of the particle scattering in the material and measurements of its initial and final position, direction and energy. The precision of the path estimate is evaluated against the Monte Carlo simulated path. Every ion from hydrogen to carbon is simulated in two scenarios: (1) where the range is fixed and (2) where the initial velocity is fixed. In the scenario where the range is kept constant, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.50 mm) and the helium path estimate (0.18 mm), but less so up to the carbon path estimate (0.09 mm). However, this scenario is identified as the configuration that maximizes the dose while minimizing the path resolution. In the scenario where the initial velocity is fixed, the maximal root-mean-square error between the estimated path and the Monte Carlo path drops significantly between the proton path estimate (0.29 mm) and the helium path estimate (0.09 mm) but increases for heavier ions up to carbon (0.12 mm). As a result, helium is found to be the particle with the most accurate path estimate for the lowest dose, potentially leading to tomographic images of higher spatial resolution.

  7. Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume

    2016-03-01

    Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
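
    A minimal sketch (not from the paper) of the basic null-collision idea the abstract builds on: Woodcock/delta tracking samples tentative collisions with a constant majorant coefficient and accepts each as real with probability mu(x)/mu_max, so no precomputed high-resolution spectrum or optical-depth grid is needed. The coefficient profile, majorant value, and domain size below are illustrative assumptions; the paper's line-sampling extension, which draws individual spectroscopic transitions from the database, is not reproduced.

    ```python
    import numpy as np

    def woodcock_free_path(mu_of_x, mu_max, x0, direction, rng, x_limit=100.0):
        """Null-collision (Woodcock/delta) tracking along a 1-D line of sight:
        sample tentative collisions with the constant majorant mu_max, then accept a
        tentative collision as real with probability mu(x)/mu_max."""
        x = x0
        while True:
            x += direction * rng.exponential(1.0 / mu_max)   # tentative collision
            if abs(x) > x_limit:
                return None                                  # escaped the domain
            if rng.random() < mu_of_x(x) / mu_max:
                return x                                     # real collision site

    # illustrative heterogeneous absorption coefficient along the path (made up)
    mu = lambda x: 0.2 + 0.15 * np.sin(x) ** 2               # bounded by mu_max = 0.35
    rng = np.random.default_rng(0)
    collisions = [woodcock_free_path(mu, mu_max=0.35, x0=0.0, direction=1.0, rng=rng)
                  for _ in range(5)]
    print(collisions)
    ```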

  8. Fast orthogonal transforms and generation of Brownian paths

    PubMed Central

    Leobacher, Gunther

    2012-01-01

    We present a number of fast constructions of discrete Brownian paths that can be used as alternatives to principal component analysis and Brownian bridge for stratified Monte Carlo and quasi-Monte Carlo. By fast we mean that a path of length n can be generated in O(nlog(n)) floating point operations. We highlight some of the connections between the different constructions and we provide some numerical examples. PMID:23471545
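
    For orientation, a hedged sketch of the two baseline constructions the abstract compares against: the standard incremental construction and the Brownian-bridge construction (the latter concentrates most of the path's variance in the first few variates, which is why it helps stratified and quasi-Monte Carlo). The paper's fast orthogonal-transform constructions themselves are not reproduced here; parameter names (n, T) are illustrative.

    ```python
    import numpy as np

    def brownian_incremental(n, T=1.0, rng=None):
        """Forward construction: cumulative sum of i.i.d. Gaussian increments."""
        rng = np.random.default_rng() if rng is None else rng
        dt = T / n
        increments = rng.normal(0.0, np.sqrt(dt), size=n)
        return np.concatenate(([0.0], np.cumsum(increments)))

    def brownian_bridge(n, T=1.0, rng=None):
        """Brownian-bridge construction: fix the endpoint first, then refine
        midpoints recursively (sketch assumes n is a power of two)."""
        rng = np.random.default_rng() if rng is None else rng
        assert (n & (n - 1)) == 0, "sketch assumes n is a power of two"
        path = np.zeros(n + 1)
        path[n] = rng.normal(0.0, np.sqrt(T))            # endpoint W(T)
        step = n
        while step > 1:
            half = step // 2
            for left in range(0, n, step):
                right, mid = left + step, left + half
                mean = 0.5 * (path[left] + path[right])
                var = half * (T / n) * 0.5               # conditional bridge variance
                path[mid] = rng.normal(mean, np.sqrt(var))
            step = half
        return path

    if __name__ == "__main__":
        print(brownian_bridge(8))
    ```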

  9. Low-frequency pulse propagation over 510 km in the Philippine Sea: A comparison of observed and theoretical pulse spreading.

    PubMed

    Andrew, Rex K; Ganse, Andrew; White, Andrew W; Mercer, James A; Dzieciuch, Matthew A; Worcester, Peter F; Colosi, John A

    2016-07-01

    Observations of the spread of wander-corrected averaged pulses propagated over 510 km for 54 h in the Philippine Sea are compared to Monte Carlo predictions using a parabolic equation and path-integral predictions. Two simultaneous m-sequence signals are used, one centered at 200 Hz, the other at 300 Hz; both have a bandwidth of 50 Hz. The internal wave field is estimated at slightly less than unity Garrett-Munk strength. The observed spreads in all the early ray-like arrivals are very small, <1 ms (for pulse widths of 17 and 14 ms), which are on the order of the sampling period. Monte Carlo predictions show similar very small spreads. Pulse spread is one consequence of scattering, which is assumed to occur primarily at upper ocean depths where scattering processes are strongest and upward propagating rays refract downward. If scattering effects in early ray-like arrivals accumulate with increasing upper turning points, spread might show a similar dependence. Real and simulation results show no such dependence. Path-integral theory prediction of spread is accurate for the earliest ray-like arrivals, but appears to be increasingly biased high for later ray-like arrivals, which have more upper turning points.

  10. Path integral Monte Carlo study on the structure and absorption spectra of alkali atoms (Li, Na, K) attached to superfluid helium clusters

    NASA Astrophysics Data System (ADS)

    Nakayama, Akira; Yamashita, Koichi

    2001-01-01

    Path integral Monte Carlo calculations have been performed to investigate the microscopic structure and thermodynamic properties of the Ak·HeN (Ak = Li, Na, K; N ⩽ 300) clusters at T = 0.5 K. Absorption spectra which correspond to the 2P←2S transitions of alkali atoms are also calculated within a pairwise additive model, which employs diatomic Ak-He potential energy curves. The size dependences of the cluster structure and absorption spectra that show the influence of the helium cluster environment are examined in detail. It is found that alkali atoms are trapped in a dimple on the helium cluster's surface and that, from the asymptotic behavior, the Ak·He300 cluster, at least semiquantitatively, mimics the local structure of experimentally produced large helium clusters in the vicinity of alkali atoms. We have successfully reproduced the overall shapes of the spectra and explained their features from a static and structural point of view. The positions, relative intensities, and line widths of the absorption maxima are calculated to be in moderate agreement with experiments [F. Stienkemeier, J. Higgins, C. Callegari, S. I. Kanorsky, W. E. Ernst, and G. Scoles, Z. Phys. D 38, 253 (1996)].

  11. Discrete Diffusion Monte Carlo for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Wollaeger, Ryan; Moses, Gregory

    2014-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) electron thermal transport method of Cao et al. is adapted to a Discrete Diffusion Monte Carlo (DDMC) solution method for eventual inclusion in a hybrid IMC-DDMC (Implicit Monte Carlo-Discrete Diffusion Monte Carlo) method. The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the iSNB-DDMC method will be presented. This work was supported by Sandia National Laboratory - Albuquerque.

  12. Path integral molecular dynamic simulation of flexible molecular systems in their ground state: Application to the water dimer

    NASA Astrophysics Data System (ADS)

    Schmidt, Matthew; Roy, Pierre-Nicholas

    2018-03-01

    We extend the Langevin equation Path Integral Ground State (LePIGS), a ground state quantum molecular dynamics method, to simulate flexible molecular systems and calculate both energetic and structural properties. We test the approach with the H2O and D2O monomers and dimers. We systematically optimize all simulation parameters and use a unity trial wavefunction. We report ground state energies, dissociation energies, and structural properties using three different water models, two of which are empirically based, q-TIP4P/F and q-SPC/Fw, and one which is ab initio, MB-pol. We demonstrate that our energies calculated from LePIGS can be merged seamlessly with low temperature path integral molecular dynamics calculations and note the similarities between the two methods. We also benchmark our energies against previous diffusion Monte Carlo calculations using the same potentials and compare to experimental results. We further demonstrate that accurate vibrational energies of the H2O and D2O monomer can be calculated from imaginary time correlation functions generated from the LePIGS simulations using solely the unity trial wavefunction.

  13. Monte Carlo Transport for Electron Thermal Transport

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo transport with discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  14. Crystal structure of solid molecular hydrogen under high pressures

    NASA Astrophysics Data System (ADS)

    Cui, T.; Ma, Y.; Zou, G.

    2002-11-01

    In an effort to achieve a comprehensive understanding of the structure of dense H2, we have performed path-integral Monte Carlo simulations for three combinations of pressures and temperatures corresponding to three phases of solid hydrogen. Our results suggest three kinds of distribution of molecules: orientationally disordered hexagonal close packed (hcp), orientationally ordered hcp with Pa3-type local orientation order and orientationally ordered orthorhombic structure of Cmca symmetry, for the three phases.

  15. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.

    PubMed

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M

    2016-07-14

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  16. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis

    NASA Astrophysics Data System (ADS)

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M.

    2016-07-01

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  17. Harmonic-phase path-integral approximation of thermal quantum correlation functions

    NASA Astrophysics Data System (ADS)

    Robertson, Christopher; Habershon, Scott

    2018-03-01

    We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.

  18. Review of computer simulations of isotope effects on biochemical reactions: From the Bigeleisen equation to Feynman's path integral.

    PubMed

    Wong, Kin-Yiu; Xu, Yuqing; Xu, Liang

    2015-11-01

    Enzymatic reactions are integral components in many biological functions and malfunctions. The iconic structure of each reaction path for elucidating the reaction mechanism in detail is the molecular structure of the rate-limiting transition state (RLTS). But the RLTS is very hard to capture or visualize experimentally. Despite the lack of an explicit molecular structure of the RLTS in experiment, we can still trace out the RLTS's unique "fingerprints" by measuring the isotope effects on the reaction rate. This set of "fingerprints" is considered the most direct probe of the RLTS. By contrast, in computer simulations the molecular structures of a number of TS candidates can often be precisely visualized on screen, but theoreticians are not sure which TS is the actual rate-limiting one. As a result, this is an excellent stage setting for a perfect "marriage" between experiment and theory for determining the structure of the RLTS, along with the reaction mechanism: experimentalists are responsible for "fingerprinting", whereas theoreticians are responsible for providing candidates that match the "fingerprints". In this Review, the origin of isotope effects on a chemical reaction is discussed from the perspectives of the classical and quantum worlds, respectively (e.g., the origins of inverse kinetic isotope effects and of all equilibrium isotope effects are purely quantum). The conventional Bigeleisen equation for isotope effect calculations, as well as its refined version in the framework of Feynman's path integral and Kleinert's variational perturbation (KP) theory for systematically incorporating anharmonicity and (non-parabolic) quantum tunneling, is also presented. In addition, the outstanding interplay between theory and experiment for successfully deducing RLTS structures and reaction mechanisms is demonstrated by applications to biochemical reactions, namely models of bacterial squalene-to-hopene polycyclization and RNA 2'-O-transphosphorylation. For all these applications, we used our recently developed path-integral method based on the KP theory, called the automated integration-free path-integral (AIF-PI) method, to perform ab initio path-integral calculations of isotope effects. As opposed to conventional path-integral molecular dynamics (PIMD) and Monte Carlo (PIMC) simulations, values calculated from our AIF-PI path-integral method can be as precise as (though not necessarily as accurate as) the numerical precision of the computing machine. Lastly, comments are made on the general challenges in theoretical modeling of candidates matching the experimental "fingerprints" of the RLTS. This article is part of a Special Issue entitled: Enzyme Transition States from Theory and Experiment.

  19. Quantum dynamics at finite temperature: Time-dependent quantum Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christov, Ivan P., E-mail: ivan.christov@phys.uni-sofia.bg

    2016-08-15

    In this work we investigate the ground state and the dissipative quantum dynamics of interacting charged particles in an external potential at finite temperature. The recently devised time-dependent quantum Monte Carlo (TDQMC) method allows a self-consistent treatment of the system of particles together with bath oscillators, first for imaginary-time propagation of Schrödinger-type equations, where both the system and the bath converge to their finite-temperature ground state, and next for real-time calculation, where the dissipative dynamics is demonstrated. In that context the application of TDQMC appears as a promising alternative to path-integral-related techniques, where the real-time propagation can be a challenge.

  20. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  1. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Matthew; Constable, Steve; Ing, Christopher

    2014-06-21

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics using Langevin-equation-based sampling of the canonical distribution. The LePIGS method originally incorporated a trivial trial wavefunction, ψ_T, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as a benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  2. Impurity self-energy in the strongly-correlated Bose systems

    NASA Astrophysics Data System (ADS)

    Panochko, Galyna; Pastukhov, Volodymyr; Vakarchuk, Ivan

    2018-02-01

    We propose a nonperturbative scheme for the calculation of the impurity spectrum in a Bose system at zero temperature. The method is based on the path-integral formulation and describes an impurity as a zero-density ideal Fermi gas interacting with the Bose system, for which the action is written in terms of density fluctuations. For the example of a 3He atom immersed in liquid helium-4, good consistency with experimental data and with the results of Monte Carlo simulations is shown.

  3. Interacting lattice systems with quantum dissipation: A quantum Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Yan, Zheng; Pollet, Lode; Lou, Jie; Wang, Xiaoqun; Chen, Yan; Cai, Zi

    2018-01-01

    Quantum dissipation arises when a large system can be split into a quantum system and an environment to which the energy of the former flows. Understanding the effect of dissipation on quantum many-body systems is of particular importance due to its potential relationship with quantum information. We propose a conceptually simple approach to introduce dissipation into interacting quantum systems in a thermodynamical context, in which every site of a one-dimensional (1D) lattice is coupled off-diagonally to its own bath. The interplay between quantum dissipation and interactions gives rise to counterintuitive phenomena such as a compressible zero-temperature state with spontaneous discrete symmetry breaking and a thermal phase transition in a 1D dissipative quantum many-body system, as revealed by quantum Monte Carlo path-integral simulations.

  4. Thermal helium clusters at 3.2 Kelvin in classical and semiclassical simulations

    NASA Astrophysics Data System (ADS)

    Schulte, J.

    1993-03-01

    The thermodynamic stability of 4He4-13 clusters at 3.2 K is investigated with the classical Monte Carlo method, with the semiclassical path-integral Monte Carlo (PIMC) method, and with the semiclassical all-order many-body method. In the all-order many-body simulation the dipole-dipole approximation including a short-range correction is used. The resulting stability plots are discussed and related to recent TOF experiments by Stephens and King. It is found that with classical Monte Carlo, the characteristics of the measured mass spectrum cannot, of course, be resolved. With PIMC, switching on more and more quantum mechanics by raising the number of virtual time steps results in more structure in the stability plot, but this does not lead to sufficient agreement with the TOF experiment. Only the all-order many-body method resolved the characteristic structures of the measured mass spectrum, including magic numbers. The result shows the influence of quantum statistics and quantum mechanics on the stability of small neutral helium clusters.

  5. Monte Carlo simulation for coherent backscattering with diverging illumination (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Wu, Wenli; Radosevich, Andrew J.; Eshein, Adam; Nguyen, The-Quyen; Backman, Vadim

    2016-03-01

    Diverging beam illumination is widely used in many optical techniques, especially in fiber optic applications, and coherence is one of the most important properties to consider for these applications. Until now, Monte Carlo simulations have been used to study the backscattering coherence phenomenon only under collimated beam illumination. We are the first to study the coherence phenomenon under an exactly diverging beam geometry, taking into account that exact time-reversed path pairs of photons, which are the main contribution to the backscattering coherence pattern under a collimated beam, cannot exist. In this work, we present a Monte Carlo simulation that considers the influence of the illumination numerical aperture. The simulation tracks the electric field for the unique portions of the forward and reverse paths in time-reversed pairs of photons, as well as for the portion of the path they share. With this approach, we can model the coherence pattern formed between the pairs by directly considering their phase difference at the collection plane. To validate this model, we use Low-coherence Enhanced Backscattering Spectroscopy, one of the instruments that measures the coherence pattern using diverging beam illumination, as the benchmark for comparison. Finally, we show how this diverging configuration significantly changes the coherence pattern under coherent and incoherent light sources. The Monte Carlo model we developed can be used to study backscattering in both coherent and incoherent situations, with both collimated and diverging beam setups.

  6. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for managing diabetes and preventing its complications. At present, clinical blood glucose measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the source-detector distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit. But there is a problem: the epidermal layer contains no arteries, veins, or capillary vessels, so when photons propagate and interact with tissue in the epidermal layer, no information is imparted to the photons. A new criterion is proposed to determine the optimal distance, named the effective path length in this paper. The path length each photon travels in the dermis is recorded while running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each exit point is calculated, and the detector should be placed at the point with the greatest summed effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
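
    A toy sketch, not the paper's code, of the effective-path-length criterion described above: photons perform a random walk through a two-layer slab (epidermis over dermis), and the path length each photon accumulates inside the dermis is tallied against its exit radius; the detector would then be placed at the radius with the largest tally. Layer thickness, optical coefficients, photon count, and isotropic scattering are illustrative assumptions.

    ```python
    import numpy as np

    # Illustrative (not from the paper) optical properties, in mm^-1 and mm.
    MU_S, MU_A = 10.0, 0.05           # scattering / absorption coefficients
    EPIDERMIS = 0.3                    # epidermis thickness; dermis lies below z = 0.3
    N_PHOTONS, N_BINS, BIN_W = 5000, 30, 0.1

    def run():
        rng = np.random.default_rng(0)
        eff_path = np.zeros(N_BINS)    # summed dermis path length per exit-radius bin
        mu_t = MU_S + MU_A
        for _ in range(N_PHOTONS):
            pos = np.zeros(3)                       # launched at origin, pointing down
            dirn = np.array([0.0, 0.0, 1.0])
            dermis_len = 0.0
            while True:
                step = rng.exponential(1.0 / mu_t)
                # crude accounting of how much of this step lies below the epidermis
                z0, z1 = pos[2], pos[2] + dirn[2] * step
                lo, hi = sorted((z0, z1))
                if hi > EPIDERMIS:
                    frac = (hi - max(lo, EPIDERMIS)) / max(abs(z1 - z0), 1e-12)
                    dermis_len += frac * step
                pos = pos + dirn * step
                if pos[2] <= 0.0:                   # escaped back through the surface
                    r = np.hypot(pos[0], pos[1])    # exit radius at the overshoot point
                    b = int(r / BIN_W)
                    if b < N_BINS:
                        eff_path[b] += dermis_len
                    break
                if rng.random() < MU_A / mu_t:      # absorbed
                    break
                # isotropic scattering (real tissue is forward-peaked; simplified here)
                cos_t = 2.0 * rng.random() - 1.0
                phi = 2.0 * np.pi * rng.random()
                sin_t = np.sqrt(1.0 - cos_t ** 2)
                dirn = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        best = np.argmax(eff_path)
        print(f"optimal source-detector separation ~ {(best + 0.5) * BIN_W:.2f} mm")

    run()
    ```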

  7. Effects of dynamical paths on the energy gap and the corrections to the free energy in path integrals of mean-field quantum spin systems

    NASA Astrophysics Data System (ADS)

    Koh, Yang Wei

    2018-03-01

    In current studies of mean-field quantum spin systems, much attention is placed on the calculation of the ground-state energy and the excitation gap, especially the latter, which plays an important role in quantum annealing. In pure systems, the finite gap can be obtained by various existing methods such as the Holstein-Primakoff transform, while the tunneling splitting at first-order phase transitions has also been studied in detail using instantons in many previous works. In disordered systems, however, it remains challenging to compute the gap of large-size systems with specific realization of disorder. Hitherto, only quantum Monte Carlo techniques are practical for such studies. Recently, Knysh [Nature Comm. 7, 12370 (2016), 10.1038/ncomms12370] proposed a method where the exponentially large dimensionality of such systems is condensed onto a random potential of much lower dimension, enabling efficient study of such systems. Here we propose a slightly different approach, building upon the method of static approximation of the partition function widely used for analyzing mean-field models. Quantum effects giving rise to the excitation gap and nonextensive corrections to the free energy are accounted for by incorporating dynamical paths into the path integral. The time-dependence of the trace of the time-ordered exponential of the effective Hamiltonian is calculated by solving a differential equation perturbatively, yielding a finite-size series expansion of the path integral. Formulae for the first excited-state energy are proposed to aid in computing the gap. We illustrate our approach using the infinite-range ferromagnetic Ising model and the Hopfield model, both in the presence of a transverse field.

  8. Path-integral simulation of solids.

    PubMed

    Herrero, C P; Ramírez, R

    2014-06-11

    The path-integral formulation of the statistical mechanics of quantum many-body systems is described, with the purpose of introducing practical techniques for the simulation of solids. Monte Carlo and molecular dynamics methods for distinguishable quantum particles are presented, with particular attention to the isothermal-isobaric ensemble. Applications of these computational techniques to different types of solids are reviewed, including noble-gas solids (helium and heavier elements), group-IV materials (diamond and elemental semiconductors), and molecular solids (with emphasis on hydrogen and ice). Structural, vibrational, and thermodynamic properties of these materials are discussed. Applications also include point defects in solids (structure and diffusion), as well as nuclear quantum effects in solid surfaces and adsorbates. Different phenomena are discussed, such as solid-to-solid and orientational phase transitions, rates of quantum processes, classical-to-quantum crossover, and various finite-temperature anharmonic effects (thermal expansion, isotopic effects, electron-phonon interactions). Nuclear quantum effects are most remarkable in the presence of light atoms, so special emphasis is placed on solids containing hydrogen as a constituent element or as an impurity.

  9. Stochastic Analysis of Orbital Lifetimes of Spacecraft

    NASA Technical Reports Server (NTRS)

    Sasamoto, Washito; Goodliff, Kandyce; Cornelius, David

    2008-01-01

    A document discusses (1) a Monte-Carlo-based methodology for probabilistic prediction and analysis of orbital lifetimes of spacecraft and (2) Orbital Lifetime Monte Carlo (OLMC)--a Fortran computer program, consisting of a previously developed long-term orbit-propagator integrated with a Monte Carlo engine. OLMC enables modeling of variances of key physical parameters that affect orbital lifetimes through the use of probability distributions. These parameters include altitude, speed, and flight-path angle at insertion into orbit; solar flux; and launch delays. The products of OLMC are predicted lifetimes (durations above specified minimum altitudes) for the number of user-specified cases. Histograms generated from such predictions can be used to determine the probabilities that spacecraft will satisfy lifetime requirements. The document discusses uncertainties that affect modeling of orbital lifetimes. Issues of repeatability, smoothness of distributions, and code run time are considered for the purpose of establishing values of code-specific parameters and number of Monte Carlo runs. Results from test cases are interpreted as demonstrating that solar-flux predictions are primary sources of variations in predicted lifetimes. Therefore, it is concluded, multiple sets of predictions should be utilized to fully characterize the lifetime range of a spacecraft.

  10. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

    Monte Carlo methods are used to evaluate data-correction methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the path length generated at emission. Here, the authors propose a new method that omits the calculation of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that of the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the relative calculation times were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256 × 256 × 256 matrix image, confirming the effectiveness of the algorithm.
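
    A one-dimensional sketch, under simplifying assumptions, of the idea described above: voxel medium labels are scanned along the photon's direction, and a boundary position is computed only when the label changes. Here the leftover path at a medium change is rescaled to preserve optical depth, which is one common way to handle the change and may differ from the authors' scheme; the medium map and attenuation coefficients are made up for illustration.

    ```python
    import numpy as np

    # Toy 1-D voxel line of medium labels (0 = water-like, 1 = bone-like), illustrative only.
    MEDIUM = np.array([0] * 20 + [1] * 10 + [0] * 20)   # 50 voxels
    DX = 1.0                                            # voxel size
    MU = {0: 0.1, 1: 0.3}                               # attenuation coefficient per medium

    def transport(x0, rng):
        """Transport a photon moving in +x: sample a free path in the current medium,
        then only scan voxel labels; a boundary position is computed only when the
        label actually changes."""
        i = int(x0 // DX)
        med = MEDIUM[i]
        s = rng.exponential(1.0 / MU[med])              # free path in the current medium
        x = x0
        j = i + 1
        while True:
            # scan forward while the medium label is unchanged (no geometry needed)
            while j < len(MEDIUM) and MEDIUM[j] == med:
                j += 1
            boundary = j * DX                           # entry plane of the first new-medium voxel
            if j >= len(MEDIUM) or x + s < boundary:
                x_int = x + s
                return x_int if x_int < len(MEDIUM) * DX else None   # None = escaped
            # medium changes before the interaction: rescale the leftover path
            s_left = s - (boundary - x)
            new_med = MEDIUM[j]
            s = s_left * MU[med] / MU[new_med]          # preserve the remaining optical depth
            x, med = boundary, new_med
            j += 1

    rng = np.random.default_rng(1)
    print([transport(0.5, rng) for _ in range(5)])      # sampled interaction sites
    ```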

  11. An analysis on the theory of pulse oximetry by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Fan, Shangchun; Cai, Rui; Xing, Weiwei; Liu, Changting; Chen, Guangfei; Wang, Junfeng

    2008-10-01

    A pulse oximeter is an electronic instrument that measures the oxygen saturation of arterial blood and the pulse rate by non-invasive techniques. It enables prompt recognition of hypoxemia. In a conventional transmittance-type pulse oximeter, the absorption of light by oxygenated and reduced hemoglobin is measured at two wavelengths, 660 nm and 940 nm. However, the accuracy and measuring range of the pulse oximeter cannot meet the requirements of clinical application. There are limitations in the theory of pulse oximetry, which is demonstrated here by the Monte Carlo method. The mean optical paths are calculated in the Monte Carlo simulation, and the results show that the mean paths are not the same at the two wavelengths.

  12. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired approaches.

  13. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. Overall, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
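
    A compact sketch of thermodynamic integration for the log marginal likelihood, ln p(y) = ∫_0^1 E_β[ln L(y|θ)] dβ, where the expectation is over the power posterior p_β(θ|y) ∝ L(y|θ)^β p(θ). A conjugate normal toy model is used so the power posterior can be sampled exactly instead of by MCMC; the model, β grid, and sample sizes are illustrative assumptions, not the groundwater case from the paper.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    # Toy conjugate model: y_i ~ N(theta, sigma^2),  theta ~ N(0, tau^2)
    sigma, tau, n = 1.0, 2.0, 20
    y = rng.normal(0.7, sigma, size=n)

    def log_like(theta):
        return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum((y - theta)**2) / sigma**2

    def sample_power_posterior(beta, size):
        """Power posterior p_beta ∝ L^beta * prior stays normal for this toy model."""
        prec = beta * n / sigma**2 + 1.0 / tau**2
        mean = (beta * np.sum(y) / sigma**2) / prec
        return rng.normal(mean, np.sqrt(1.0 / prec), size=size)

    betas = np.linspace(0.0, 1.0, 21) ** 3          # denser near beta = 0, a common choice
    e_loglike = np.array([np.mean([log_like(t) for t in sample_power_posterior(b, 5000)])
                          for b in betas])

    # trapezoid rule over beta gives the thermodynamic-integration estimate of ln p(y)
    log_ml_ti = float(np.sum(np.diff(betas) * 0.5 * (e_loglike[:-1] + e_loglike[1:])))

    # exact answer for this conjugate toy model, for comparison
    cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
    log_ml_exact = multivariate_normal.logpdf(y, mean=np.zeros(n), cov=cov)
    print(f"TI estimate: {log_ml_ti:.3f}   exact: {log_ml_exact:.3f}")
    ```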

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodríguez-Cantano, Rocío; Pérez de Tudela, Ricardo; Bartolomei, Massimiliano

    Coronene-doped helium clusters have been studied by means of classical and quantum mechanical (QM) methods using a recently developed He–C24H12 global potential based on the use of optimized atom-bond improved Lennard-Jones functions. Equilibrium energies and geometries at global and local minima for systems with up to 69 He atoms were calculated by means of an evolutive algorithm and a basin-hopping approach and compared with results from path integral Monte Carlo (PIMC) calculations at 2 K. A detailed analysis performed for the smallest sizes shows that the precise localization of the He atoms forming the first solvation layer over the molecular substrate is affected by differences between relative potential minima. The comparison of the PIMC results with the predictions from the classical approaches and with diffusion Monte Carlo results allows one to examine the importance of both the QM and thermal effects.

  15. Theoretical Studies of Liquid He-4 Near the Superfluid Transition

    NASA Technical Reports Server (NTRS)

    Manousakis, Efstratios

    2002-01-01

    We performed theoretical studies of liquid helium by applying state of the art simulation and finite-size scaling techniques. We calculated universal scaling functions for the specific heat and superfluid density for various confining geometries relevant for experiments such as the confined helium experiment and other ground based studies. We also studied microscopically how the substrate imposes a boundary condition on the superfluid order parameter as the superfluid film grows layer by layer. Using path-integral Monte Carlo, a quantum Monte Carlo simulation method, we investigated the rich phase diagram of helium monolayer, bilayer and multilayer on a substrate such as graphite. We find excellent agreement with the experimental results using no free parameters. Finally, we carried out preliminary calculations of transport coefficients such as the thermal conductivity for bulk or confined helium systems and of their scaling properties. All our studies provide theoretical support for various experimental studies in microgravity.

  16. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.
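
    To illustrate what "diffusion Monte Carlo without guiding functions" means operationally, here is a bare-bones DMC sketch for a continuum double-well toy potential (not the spin or shamrock models studied in the paper): walkers diffuse freely and branch according to exp(-dτ(V - E_ref)), with E_ref steered to stabilize the population. All parameters are illustrative assumptions.

    ```python
    import numpy as np

    def dmc_double_well(n_walkers=2000, n_steps=4000, dtau=0.01, seed=0):
        """Bare diffusion Monte Carlo (no guiding function) for V(x) = (x^2 - 1)^2,
        with hbar = m = 1 so H = -1/2 d^2/dx^2 + V.  Illustrative toy only."""
        rng = np.random.default_rng(seed)
        x = rng.normal(-1.0, 0.2, size=n_walkers)          # start in the left well
        e_ref = np.mean((x**2 - 1.0)**2)
        energies = []
        for _ in range(n_steps):
            x = x + rng.normal(0.0, np.sqrt(dtau), size=x.size)   # free diffusion step
            v = (x**2 - 1.0)**2
            weights = np.exp(-dtau * (v - e_ref))
            # birth/death: each walker is copied int(w + u) times
            mult = (weights + rng.random(x.size)).astype(int)
            x = np.repeat(x, mult)
            if x.size == 0:                                # population died; crude restart
                x = rng.normal(-1.0, 0.2, size=n_walkers)
            # steer the reference energy to keep the population near its target size
            e_ref = np.mean((x**2 - 1.0)**2) - np.log(x.size / n_walkers) / (10 * dtau)
            energies.append(e_ref)
        return np.mean(energies[n_steps // 2:]), x

    if __name__ == "__main__":
        e0, walkers = dmc_double_well()
        print(f"growth-estimator energy ~ {e0:.3f}")
        print(f"fraction of walkers that tunneled to the right well: {np.mean(walkers > 0):.2f}")
    ```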

  17. Edgeworth expansions of stochastic trading time

    NASA Astrophysics Data System (ADS)

    Decamps, Marc; De Schepper, Ann

    2010-08-01

    Under most local and stochastic volatility models the underlying forward is assumed to be a positive function of a time-changed Brownian motion. It relates nicely the implied volatility smile to the so-called activity rate in the market. Following Young and DeWitt-Morette (1986) [8], we propose to apply the Duru-Kleinert process-cum-time transformation in path integral to formulate the transition density of the forward. The method leads to asymptotic expansions of the transition density around a Gaussian kernel corresponding to the average activity in the market conditional on the forward value. The approximation is numerically illustrated for pricing vanilla options under the CEV model and the popular normal SABR model. The asymptotics can also be used for Monte Carlo simulations or backward integration schemes.

  18. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

    We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimensions, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge- and spin-gaps.

  19. Parameter Estimation and Model Validation of Nonlinear Dynamical Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abarbanel, Henry; Gill, Philip

    In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods for statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.

  20. Sodium dopants in helium clusters: Structure, equilibrium and submersion kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calvo, F.

    Alkali impurities bind to helium nanodroplets very differently depending on their size and charge state, large neutral or charged dopants being wetted by the droplet whereas small neutral impurities prefer to reside at the surface. Using various computational modeling tools such as quantum Monte Carlo and path-integral molecular dynamics simulations, we have revisited some aspects of the physical chemistry of helium droplets interacting with sodium impurities, including the onset of snowball formation in the presence of many-body polarization forces, the transition from non-wetted to wetted behavior in larger sodium clusters, and the kinetics of submersion of small dopants after sudden ionization.

  1. Monte Carlo algorithms for Brownian phylogenetic models.

    PubMed

    Horvilleur, Benjamin; Lartillot, Nicolas

    2014-11-01

    Brownian models have been introduced in phylogenetics for describing variation in substitution rates through time, with applications to molecular dating or to the comparative analysis of variation in substitution patterns among lineages. Thus far, however, the Monte Carlo implementations of these models have relied on crude approximations, in which the Brownian process is sampled only at the internal nodes of the phylogeny or at the midpoints along each branch, and the unknown trajectory between these sampled points is summarized by simple branchwise average substitution rates. A more accurate Monte Carlo approach is introduced, explicitly sampling a fine-grained discretization of the trajectory of the (potentially multivariate) Brownian process along the phylogeny. Generic Monte Carlo resampling algorithms are proposed for updating the Brownian paths along and across branches. Specific computational strategies are developed for efficient integration of the finite-time substitution probabilities across branches induced by the Brownian trajectory. The mixing properties and the computational complexity of the resulting Markov chain Monte Carlo sampler scale reasonably with the discretization level, allowing practical applications with up to a few hundred discretization points along the entire depth of the tree. The method can be generalized to other Markovian stochastic processes, making it possible to implement a wide range of time-dependent substitution models with well-controlled computational precision. The program is freely available at www.phylobayes.org.
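
    A hedged sketch of the core refinement described above: sampling a fine-grained discretization of the Brownian trajectory along a single branch, conditional on the values already drawn at the branch's two nodes (a Brownian bridge), from which a branch-average substitution rate can be computed instead of the crude midpoint approximation. Function names and parameters are illustrative; the published sampler also resamples paths across branches, which is not shown.

    ```python
    import numpy as np

    def sample_branch_bridge(x_parent, x_child, t, n_points, sigma, rng):
        """Sample a fine-grained Brownian trajectory of log-rate along one branch of
        length t, conditional on the node values at its two ends (a Brownian bridge)."""
        times = np.linspace(0.0, t, n_points + 2)
        path = np.empty(n_points + 2)
        path[0], path[-1] = x_parent, x_child
        for k in range(1, n_points + 1):
            t_prev, t_k, t_end = times[k - 1], times[k], times[-1]
            # bridge step: condition on the previously sampled point and the endpoint
            a = (t_k - t_prev) / (t_end - t_prev)
            mean = path[k - 1] + a * (path[-1] - path[k - 1])
            var = sigma**2 * (t_k - t_prev) * (t_end - t_k) / (t_end - t_prev)
            path[k] = rng.normal(mean, np.sqrt(var))
        return times, path

    rng = np.random.default_rng(0)
    times, logrates = sample_branch_bridge(x_parent=-0.2, x_child=0.4, t=1.0,
                                           n_points=20, sigma=0.5, rng=rng)
    # branch-average substitution rate from the fine-grained trajectory
    print("branch-average rate:", np.mean(np.exp(logrates)))
    ```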

  2. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    NASA Astrophysics Data System (ADS)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Investigating such events with traditional numerical tools is notoriously difficult, as these tools fail to systematically sample the fluctuations around them. We propose instead that an importance-sampling Monte Carlo method can selectively highlight extreme events in remote areas of phase space and induce their occurrence. We present a new computational approach based on the path integral formulation of stochastic dynamics and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers equation, subjected to random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
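
    A generic Hybrid/Hamiltonian Monte Carlo sketch of the kind of path sampling described above, applied to a stand-in quadratic (harmonic) lattice action rather than the stochastic Burgers action of the paper: the whole discretized path is updated at once by leapfrog integration followed by a Metropolis accept/reject step. The action, step size, and trajectory length are illustrative assumptions.

    ```python
    import numpy as np

    def action(x, dt=0.1, omega=1.0):
        """Euclidean action of a discretized harmonic path with periodic boundaries,
        standing in for the stochastic-dynamics action used in the paper."""
        kinetic = 0.5 * np.sum((np.roll(x, -1) - x) ** 2) / dt
        potential = 0.5 * omega**2 * dt * np.sum(x**2)
        return kinetic + potential

    def grad_action(x, dt=0.1, omega=1.0):
        lap = (np.roll(x, -1) - 2 * x + np.roll(x, 1)) / dt
        return -lap + omega**2 * dt * x

    def hmc_step(x, rng, eps=0.05, n_leap=20):
        """One HMC update of the whole path: leapfrog integration + Metropolis test."""
        p = rng.normal(size=x.size)
        x_new, p_new = x.copy(), p.copy()
        p_new -= 0.5 * eps * grad_action(x_new)
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new -= eps * grad_action(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_action(x_new)
        h_old = action(x) + 0.5 * np.dot(p, p)
        h_new = action(x_new) + 0.5 * np.dot(p_new, p_new)
        if rng.random() < np.exp(h_old - h_new):
            return x_new, True
        return x, False

    rng = np.random.default_rng(0)
    x, acc = np.zeros(64), 0                 # 64 time slices, started from the zero path
    for _ in range(2000):
        x, ok = hmc_step(x, rng)
        acc += ok
    print(f"acceptance rate: {acc / 2000:.2f},  <x^2> = {np.mean(x**2):.3f}")
    ```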

  3. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  4. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  5. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be made on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g., SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data and hence provide a computationally attractive approach for integrating information about a reservoir model.
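
    A minimal sketch of the first suggested approach, the preferential simulation path: cells whose soft data are most informative (lowest Bernoulli entropy) are visited first. Only the ordering step is shown; the surrounding MPS sampler (SNESIM/ENESIM/Direct Sampling style) and the treatment of non-co-located soft data are not reproduced, and the grid and probabilities are made up for illustration.

    ```python
    import numpy as np

    def preferential_path(soft_prob, rng, noise=1e-3):
        """Return a visiting order for sequential simulation in which the cells whose
        soft (uncertain) data are most informative are visited first.
        soft_prob: 2-D array of P(facies = 1) per cell; 0.5 means 'uninformed'."""
        p = np.clip(soft_prob.ravel(), 1e-9, 1 - 1e-9)
        entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # low entropy = informative
        # small random jitter breaks ties so equally-informed cells come in random order
        score = entropy + noise * rng.random(p.size)
        return np.argsort(score)                               # most informed cells first

    rng = np.random.default_rng(0)
    soft = np.full((5, 5), 0.5)                                # mostly uninformed grid
    soft[0, 0], soft[2, 3], soft[4, 4] = 0.95, 0.10, 0.80      # three soft observations
    order = preferential_path(soft, rng)
    print("first cells visited (flat indices):", order[:5])
    ```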

  6. Path integral Monte Carlo simulations of H2 adsorbed to lithium-doped benzene: A model for hydrogen storage materials

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Kolmann, Stephen J.; D'Arcy, Jordan H.; Crittenden, Deborah L.; Jordan, Meredith J. T.

    2015-11-01

    Finite temperature quantum and anharmonic effects are studied in H2-Li+-benzene, a model hydrogen storage material, using path integral Monte Carlo (PIMC) simulations on an interpolated potential energy surface refined over the eight intermolecular degrees of freedom based upon M05-2X/6-311+G(2df,p) density functional theory calculations. Rigid-body PIMC simulations are performed at temperatures ranging from 77 K to 150 K, producing both quantum and classical probability density histograms describing the adsorbed H2. Quantum effects broaden the histograms with respect to their classical analogues and increase the expectation values of the radial and angular polar coordinates describing the location of the center-of-mass of the H2 molecule. The rigid-body PIMC simulations also provide estimates of the change in internal energy, ΔUads, and enthalpy, ΔHads, for H2 adsorption onto Li+-benzene, as a function of temperature. These estimates indicate that quantum effects are important even at room temperature and classical results should be interpreted with caution. Our results also show that anharmonicity is more important in the calculation of U and H than coupling—coupling between the intermolecular degrees of freedom becomes less important as temperature increases whereas anharmonicity becomes more important. The most anharmonic motions in H2-Li+-benzene are the "helicopter" and "ferris wheel" H2 rotations. Treating these motions as one-dimensional free and hindered rotors, respectively, provides simple corrections to standard harmonic oscillator, rigid rotor thermochemical expressions for internal energy and enthalpy that encapsulate the majority of the anharmonicity. At 150 K, our best rigid-body PIMC estimates for ΔUads and ΔHads are -13.3 ± 0.1 and -14.5 ± 0.1 kJ mol-1, respectively.
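
    A small sketch of the simple rotor correction mentioned at the end of the abstract: the thermal internal energy of a one-dimensional free rotor (the "helicopter" rotation) obtained by direct Boltzmann summation over levels E_m = B m². The rotational constant and temperatures below are illustrative, nuclear-spin (ortho/para) restrictions are ignored, and the hindered "ferris wheel" rotor, which additionally needs a barrier height, is not shown.

    ```python
    import numpy as np

    def free_rotor_internal_energy(B_cm, T, m_max=200):
        """Thermal internal energy of a 1-D free rotor with levels E_m = B m^2
        (m = 0, ±1, ±2, ...), by direct Boltzmann summation.
        B_cm: rotational constant in cm^-1; returns energy in kJ/mol."""
        k_cm = 0.6950356                       # Boltzmann constant in cm^-1 per K
        cm_to_kjmol = 0.01196266               # 1 cm^-1 expressed in kJ/mol
        m = np.arange(-m_max, m_max + 1)
        e = B_cm * m**2                        # level energies in cm^-1
        w = np.exp(-e / (k_cm * T))
        u_cm = np.sum(e * w) / np.sum(w)       # <E> in cm^-1
        return u_cm * cm_to_kjmol

    # free H2 has B of roughly 60 cm^-1; the value here is purely illustrative
    for T in (77, 150, 300):
        print(T, "K :", round(free_rotor_internal_energy(60.0, T), 3), "kJ/mol")
    ```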

  7. Path Integral Monte Carlo Simulations of Warm Dense Matter and Plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Militzer, Burkhard

    2018-01-13

    New path integral Monte Carlo (PIMC) simulation techniques will be developed and applied to derive the equation of state (EOS) for the regime of warm dense matter and dense plasmas where existing first-principles methods cannot be applied. While standard density functional theory has been used to accurately predict the structure of many solids and liquids up to temperatures on the order of 10,000 K, this method is not applicable at much higher temperatures where electronic excitations become important, because the number of partially occupied electronic orbitals reaches intractably large numbers and, more importantly, the use of zero-temperature exchange-correlation functionals introduces an uncontrolled approximation. Here we focus on PIMC methods that become more and more efficient with increasing temperature and still include all electronic correlation effects. In this approach, electronic excitations increase the efficiency rather than reduce it. While it has commonly been assumed that such methods can only be applied to elements without core electrons like hydrogen and helium, we recently showed how to extend PIMC to heavier elements by performing the first PIMC simulations of carbon and water plasmas [Driver, Militzer, Phys. Rev. Lett. 108 (2012) 115502]. Here we propose to continue this important development to extend the reach of PIMC simulations to yet heavier elements and also lower temperatures. The goal is to provide a robust first-principles simulation method that can accurately and efficiently study materials with excited electrons at solid-state densities in order to access parts of the phase diagram, such as the regime of warm dense matter and plasmas, where so far only more approximate, semi-analytical methods could be applied.

  8. Computational study of the melting-freezing transition in the quantum hard-sphere system for intermediate densities. II. Structural features.

    PubMed

    Sesé, Luis M; Bailey, Lorna E

    2007-04-28

    The structural features of the quantum hard-sphere system in the region of the fluid-face-centered-cubic-solid transition, for reduced number densities 0.45

  9. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm-1] to the curved CSP and MLP path estimates (5 lp cm-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
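
    For concreteness, a cubic spline path of the kind mentioned above can be built from nothing more than the measured entry/exit positions and directions. The sketch below is a hedged Python illustration using the standard cubic Hermite basis, not the authors' code; scaling the tangents by the chord length is an assumption.

      import numpy as np

      def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
          """Hermite cubic estimate of a proton path from entry/exit positions
          (p_in, p_out) and unit direction vectors (d_in, d_out), t in [0, 1]."""
          p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
          p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
          t = np.linspace(0.0, 1.0, n)[:, None]
          h00 = 2*t**3 - 3*t**2 + 1
          h10 = t**3 - 2*t**2 + t
          h01 = -2*t**3 + 3*t**2
          h11 = t**3 - t**2
          L = np.linalg.norm(p_out - p_in)          # scale tangents by chord length
          return h00*p_in + h10*L*d_in + h01*p_out + h11*L*d_out

      # Example: proton enters along +x and exits slightly deflected.
      path = cubic_spline_path([0.0, 0.0], [1.0, 0.0], [10.0, 0.5], [0.995, 0.0998])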

  10. Lateral excitonic switching in vertically stacked quantum dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarzynka, Jarosław R.; McDonald, Peter G.; Galbraith, Ian

    2016-06-14

    We show that the application of a vertical electric field to the Coulomb-interacting system in stacked quantum dots leads to a 90° in-plane switching of the charge probability distribution, in contrast to a single dot, where no such switching exists. Results are obtained using path integral quantum Monte Carlo with realistic dot geometry, alloy composition, and piezoelectric potential profiles. The origin of the switching lies in the strain interactions between the stacked dots, hence the need for more than one layer of dots. The lateral polarization and electric field dependence of the radiative lifetimes of the excitonic switch are also discussed.

  11. Torsional anharmonicity in the conformational thermodynamics of flexible molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F., III; Clary, David C.

    We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.

  12. Structural phase transition at high temperatures in solid molecular hydrogen and deuterium

    NASA Astrophysics Data System (ADS)

    Cui, T.; Takada, Y.; Cui, Q.; Ma, Y.; Zou, G.

    2001-07-01

    We study the effect of temperature up to 1000 K on the structure of dense molecular para-hydrogen (p-H2) and ortho-deuterium (o-D2), using the path-integral Monte Carlo method. We find a structural phase transition from orientationally disordered hexagonal close packed (hcp) to an orthorhombic structure of Cmca symmetry before melting. The transition is basically induced by thermal fluctuations, but quantum fluctuations of protons (deuterons) are important in determining the transition temperature through effectively hardening the intermolecular interaction. We estimate the phase line between hcp and Cmca phases as well as the melting line of the Cmca solid.

  13. CO oxidation reaction on Pt(111) studied by the dynamic Monte Carlo method including lateral interactions of adsorbates.

    PubMed

    Nagasaka, Masanari; Kondoh, Hiroshi; Nakai, Ikuyo; Ohta, Toshiaki

    2007-01-28

    The dynamics of adsorbate structures during CO oxidation on Pt(111) surfaces and its effects on the reaction were studied by the dynamic Monte Carlo method including lateral interactions of adsorbates. The lateral interaction energies between adsorbed species were calculated by the density functional theory method. Dynamic Monte Carlo simulations were performed for the oxidation reaction over a mesoscopic scale, where the experimentally determined activation energies of elementary paths were altered by the calculated lateral interaction energies. The simulated results reproduced the characteristics of the microscopic and mesoscopic scale adsorbate structures formed during the reaction, and revealed that the complicated reaction kinetics is comprehensively explained by a single reaction path affected by the surrounding adsorbates. We also propose from the simulations that weakly adsorbed CO molecules at domain boundaries promote the island-periphery specific reaction.

  14. Use of speckle for determining the response characteristics of Doppler imaging radars

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1986-01-01

    An optical model is developed for imaging radars such as the SAR on Seasat and the Shuttle Imaging Radar (SIR-B) by analyzing the Doppler shift of individual speckles in the image. The signal received at the spacecraft is treated in terms of a Fresnel-Kirchhoff integration over all backscattered radiation within a Huygens aperture at the earth. Account is taken of the movement of the spacecraft along the orbital path between emission and reception. The individual points are described by integration of the point source amplitude with a Green's function scattering kernel. Doppler data at each point furnish the coordinates for visual representations. A Rayleigh-Poisson model of the surface scattering characteristics is used with Monte Carlo methods to generate simulations of Doppler radar speckle that compare well with Seasat SAR and SIR-B data.

  15. Social Milieu Oriented Routing: A New Dimension to Enhance Network Security in WSNs.

    PubMed

    Liu, Lianggui; Chen, Li; Jia, Huiling

    2016-02-19

    In large-scale wireless sensor networks (WSNs), in order to enhance network security, it is crucial for a trustor node to perform social milieu oriented routing to a target trustee node to carry out trust evaluation. This challenging social milieu oriented routing with more than one end-to-end Quality of Trust (QoT) constraint has proved to be NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this challenging problem. However, existing solutions cannot guarantee the efficiency of searching; that is, they can hardly avoid obtaining partially optimal solutions during the search process. Quantum annealing (QA) uses delocalization and tunneling to avoid falling into local minima without sacrificing execution time. This has proven to be a promising approach to many optimization problems in the recently published literature. In this paper, for the first time, with the help of a novel approach, namely configuration path-integral Monte Carlo (CPIMC) simulations, a QA-based optimal social trust path (QA_OSTP) selection algorithm is applied to the extraction of the optimal social trust path in large-scale WSNs. Extensive experiments have been conducted, and the experimental results demonstrate that QA_OSTP outperforms its heuristic counterparts.

  16. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.

  17. Path-Integration Computation of the Transport Properties of Polymers Nanoparticles and Complex Biological Structures

    NASA Astrophysics Data System (ADS)

    Douglas, Jack

    2014-03-01

    One of the things that puzzled me when I was a PhD student working under Karl Freed was the curious unity between the theoretical descriptions of excluded volume interactions in polymers, the hydrodynamic properties of polymers in solution, and the critical properties of fluid mixtures, gases and diverse other materials (magnets, superfluids, etc.) when these problems were formally expressed in terms of Wiener path integration and the interactions treated through a combination of epsilon expansion and renormalization group (RG) theory. It seemed that only the interaction labels changed from one problem to the other. What do these problems have in common? Essential clues to these interrelations became apparent when Karl Freed, myself and Shi-Qing Wang together began to study polymers interacting with hyper-surfaces of continuously variable dimension where the Feynman perturbation expansions could be performed through infinite order so that we could really understand what the RG theory was doing. It is evidently simply a particular method for resumming perturbation theory, and former ambiguities no longer existed. An integral equation extension of this type of exact calculation to ``surfaces'' of arbitrary fixed shape finally revealed the central mathematical object that links these diverse physical models: the capacity of polymer chains, whose value vanishes at the critical dimension of 4 and whose magnitude is linked to the friction coefficient of polymer chains, the virial coefficient of polymers and the 4-point function of the phi-4 field theory. Once this central object was recognized, it then became possible to solve diverse problems in material science through the calculation of capacity, and related ``virial'' properties, through Monte Carlo sampling of random walk paths. The essential ideas of this computational method are discussed and some applications given to non-trivial problems: nanotubes treated as either rigid rods or ensembles of worm-like chains having finite cross-section, DNA, nanoparticles with grafted chain layers and knotted polymers. The path-integration method, which grew up from research in Karl Freed's group, is evidently a powerful tool for computing basic transport properties of complex-shaped objects and should find increasing application in polymer science, nanotechnological applications and biology.

  18. An Educational MONTE CARLO Simulation/Animation Program for the Cosmic Rays Muons and a Prototype Computer-Driven Hardware Display.

    ERIC Educational Resources Information Center

    Kalkanis, G.; Sarris, M. M.

    1999-01-01

    Describes an educational software program for the study of, and detection methods for, cosmic-ray muons passing through several light transparent materials (i.e., water, air, etc.). Simulates muons' and Cherenkov photons' paths and interactions and visualizes/animates them on the computer screen using Monte Carlo methods/techniques which employ…

  19. Gradient corrections to the exchange-correlation free energy

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-07

    We develop the first-order gradient correction to the exchange-correlation free energy of the homogeneous electron gas for use in finite-temperature density functional calculations. Based on this, we propose and implement a simple temperature-dependent extension for functionals beyond the local density approximation. These finite-temperature functionals show improvement over zero-temperature functionals, as compared to path-integral Monte Carlo calculations for deuterium equations of state, and perform with no increase in computational cost over zero-temperature functionals, so they should be used for finite-temperature calculations. Furthermore, while the present functionals are valid at all temperatures including zero, non-negligible differences from zero-temperature functionals begin at temperatures above 10,000 K.

  20. Simulating Asymmetric Top Impurities in Superfluid Clusters: A para-Water Dopant in para-Hydrogen.

    PubMed

    Zeng, Tao; Li, Hui; Roy, Pierre-Nicholas

    2013-01-03

    We present the first simulation study of bosonic clusters doped with an asymmetric top molecule. The path-integral Monte Carlo method with the latest methodological advance in treating rigid-body rotation [Noya, E. G.; Vega, C.; McBride, C. J. Chem. Phys. 2011, 134, 054117] is employed to study a para-water impurity in para-hydrogen clusters with up to 20 para-hydrogen molecules. The growth pattern of the doped clusters is similar in nature to that of pure clusters. The para-water molecule appears to rotate freely in the cluster. The presence of para-water substantially quenches the superfluid response of para-hydrogen with respect to the space-fixed frame.

  1. The uniform quantized electron gas revisited

    NASA Astrophysics Data System (ADS)

    Lomba, Enrique; Høye, Johan S.

    2017-11-01

    In this article we continue and extend our recent work on the correlation energy of the quantized electron gas of uniform density at temperature T=0. As before, we utilize the methods, properties, and results obtained by means of classical statistical mechanics. These were extended to quantized systems via the Feynman path integral formalism. The latter translates the quantum problem into a classical polymer problem in four dimensions. Again, the well known RPA (random phase approximation) is recovered as a basic result which we then modify and improve upon. Here we analyze the condition of thermodynamic self-consistency. Our numerical calculations exhibit a remarkable agreement with well known results of a standard parameterization of Monte Carlo correlation energies.

  2. On zero variance Monte Carlo path-stretching schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lux, I.

    1983-08-01

    A zero variance path-stretching biasing scheme proposed for a special case by Dwivedi is derived in full generality. The procedure turns out to be the generalization of the exponential transform. It is shown that the biased game can be interpreted as an analog simulation procedure, thus saving some computational effort in comparison with the corresponding nonanalog game.

  3. An investigation of nonuniform dose deposition from an electron beam

    NASA Astrophysics Data System (ADS)

    Lilley, William; Luu, Kieu X.

    1994-08-01

    In a search for an explanation of nonuniform electron-beam dose deposition, the integrated tiger series (ITS) of coupled electron/photon Monte Carlo transport codes was used to calculate energy deposition in the package materials of an application-specific integrated circuit (ASIC) while the thicknesses of some of the materials were varied. The thicknesses of three materials that were in the path of an electron-beam pulse were varied independently so that analysis could determine how the radiation dose measurements using thermoluminescent dosimeters (TLD's) would be affected. The three materials were chosen because they could vary during insertion of the die into the package or during the process of taking dose measurements. The materials were aluminum, HIPEC (a plastic), and silver epoxy. The calculations showed that with very small variations in thickness, the silver epoxy had a large effect on the dose uniformity over the area of the die.

  4. A Primer in Monte Carlo Integration Using Mathcad

    ERIC Educational Resources Information Center

    Hoyer, Chad E.; Kegerreis, Jeb S.

    2013-01-01

    The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
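
    Although the record above refers to a Mathcad document, the essentials of crude Monte Carlo integration are language-agnostic; a minimal Python sketch (our own illustration, not the article's worksheet) is:

      import numpy as np

      def mc_integrate(f, a, b, n=100_000, rng=None):
          """Crude Monte Carlo estimate of the integral of f over [a, b],
          with a one-sigma statistical error estimate."""
          rng = np.random.default_rng(rng)
          x = rng.uniform(a, b, n)
          fx = f(x)
          estimate = (b - a) * fx.mean()
          error = (b - a) * fx.std(ddof=1) / np.sqrt(n)
          return estimate, error

      # Example: integral of sin(x) over [0, pi]; the exact value is 2.
      val, err = mc_integrate(np.sin, 0.0, np.pi)
      print(f"{val:.4f} +/- {err:.4f}")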

  5. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows one to learn feedback controllers using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.

  6. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential.

    PubMed

    Edwards, James P; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  7. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    NASA Astrophysics Data System (ADS)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  8. Homing by path integration when a locomotion trajectory crosses itself.

    PubMed

    Yamamoto, Naohide; Meléndez, Jayleen A; Menzies, Derek T

    2014-01-01

    Path integration is a process with which navigators derive their current position and orientation by integrating self-motion signals along a locomotion trajectory. It has been suggested that path integration becomes disproportionately erroneous when the trajectory crosses itself. However, there is a possibility that this previous finding was confounded by effects of the length of a traveled path and the amount of turns experienced along the path, two factors that are known to affect path integration performance. The present study was designed to investigate whether the crossover of a locomotion trajectory truly increases errors of path integration. In an experiment, blindfolded human navigators were guided along four paths that varied in their lengths and turns, and attempted to walk directly back to the beginning of the paths. Only one of the four paths contained a crossover. Results showed that errors yielded from the path containing the crossover were not always larger than those observed in other paths, and the errors were attributed solely to the effects of longer path lengths or greater degrees of turns. These results demonstrated that path crossover does not always cause significant disruption in path integration processes. Implications of the present findings for models of path integration are discussed.

  9. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  10. Molecular simulation of the thermodynamic, structural, and vapor-liquid equilibrium properties of neon

    NASA Astrophysics Data System (ADS)

    Vlasiuk, Maryna; Frascoli, Federico; Sadus, Richard J.

    2016-09-01

    The thermodynamic, structural, and vapor-liquid equilibrium properties of neon are comprehensively studied using ab initio, empirical, and semi-classical intermolecular potentials and classical Monte Carlo simulations. Path integral Monte Carlo simulations for isochoric heat capacity and structural properties are also reported for two empirical potentials and one ab initio potential. The isobaric and isochoric heat capacities, thermal expansion coefficient, thermal pressure coefficient, isothermal and adiabatic compressibilities, Joule-Thomson coefficient, and the speed of sound are reported and compared with experimental data for the entire range of liquid densities from the triple point to the critical point. Lustig's thermodynamic approach is formally extended for temperature-dependent intermolecular potentials. Quantum effects are incorporated using the Feynman-Hibbs quantum correction, which results in significant improvement in the accuracy of predicted thermodynamic properties. The new Feynman-Hibbs version of the Hellmann-Bich-Vogel potential predicts the isochoric heat capacity to an accuracy of 1.4% over the entire range of liquid densities. It also predicts other thermodynamic properties more accurately than alternative intermolecular potentials.
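
    The Feynman-Hibbs correction mentioned above replaces the bare pair potential by a temperature-dependent effective one. A hedged Python sketch of the commonly used quadratic form, U_FH(r) = U(r) + hbar^2/(24 mu kB T) [U''(r) + 2 U'(r)/r], with numerical derivatives and illustrative (not the paper's) neon Lennard-Jones parameters, is:

      import numpy as np

      HBAR = 1.054571817e-34   # J s
      KB = 1.380649e-23        # J/K

      def feynman_hibbs(u, r, mu, T, h=1e-12):
          """Quadratic Feynman-Hibbs effective pair potential for a radial
          potential u(r); mu is the reduced mass, derivatives are taken with
          central finite differences of step h."""
          du = (u(r + h) - u(r - h)) / (2*h)
          d2u = (u(r + h) - 2*u(r) + u(r - h)) / h**2
          return u(r) + HBAR**2 / (24.0 * mu * KB * T) * (d2u + 2.0*du/r)

      # Illustrative neon-like Lennard-Jones pair at 30 K (assumed parameters).
      eps, sigma = 36.8 * KB, 2.79e-10                      # J, m
      m_ne = 20.18 * 1.66053906660e-27                      # kg
      lj = lambda r: 4*eps*((sigma/r)**12 - (sigma/r)**6)
      print(feynman_hibbs(lj, 3.0e-10, m_ne/2, 30.0))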

  11. Computational Physics' Greatest Hits

    NASA Astrophysics Data System (ADS)

    Bug, Amy

    2011-03-01

    The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session ``What is computational physics?'' speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session ``Great Advances...Past, Present and Future'' in which five dramatic areas of discovery (five of our ``greatest hits'') are chronicled: the physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo Methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify ``greatest hits'' in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.

  12. Monte Carlo simulation of the mixed alkali effect with cooperative jumps

    NASA Astrophysics Data System (ADS)

    Habasaki, Junko; Hiwatari, Yasuaki

    2000-12-01

    In our previous works on molecular dynamics (MD) simulations of lithium metasilicate (Li2SiO3), it was shown that the long-time behavior of the lithium ions in Li2SiO3 is characterized by a component showing enhanced diffusion (Lévy flight) due to cooperative jumps. It was also confirmed that the contribution of this component decreases in the mixed alkali silicate (LiKSiO3) owing to interception of the jump paths. Namely, cooperative jumps of like ions are much reduced in number owing to the interception of the paths by unlike alkali-metal ions. In the present work, we have performed a Monte Carlo simulation on a cubic lattice in order to establish the role of cooperative jumps in the transport properties of a mixed alkali glass. Fixed particles (blockage) were introduced to represent the interception of the jump paths by unlike alkali-metal ions. Two types of cooperative motions (a pull type and a push type) were taken into account. The low dimensionality of the jump path caused by blockage resulted in a decrease of the diffusion coefficient of the particles. The effect of blockage is enhanced when cooperative motions are introduced.
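
    A stripped-down version of such a lattice Monte Carlo experiment is easy to sketch. The Python toy below (a single tracer, no pull/push cooperative moves, illustrative parameters) only demonstrates how fixed blockage sites reduce the mean-squared displacement per attempted jump:

      import numpy as np

      def lattice_diffusion(L=20, n_block=800, n_steps=2000, seed=None):
          """Random walk of one tracer on an L^3 cubic lattice with periodic
          boundaries; jumps into randomly placed fixed 'blockage' sites are
          rejected. Returns the mean-squared displacement per attempted step."""
          rng = np.random.default_rng(seed)
          blocked = set(map(tuple, rng.integers(0, L, size=(n_block, 3))))
          moves = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])
          pos = rng.integers(0, L, size=3)
          while tuple(pos % L) in blocked:                  # start on a free site
              pos = rng.integers(0, L, size=3)
          start = pos.copy()
          for _ in range(n_steps):
              trial = pos + moves[rng.integers(6)]
              if tuple(trial % L) not in blocked:           # jump only to free sites
                  pos = trial
          return ((pos - start) ** 2).sum() / n_steps

      # More blockage -> smaller displacement per step (reduced diffusivity).
      print(lattice_diffusion(n_block=0, seed=1), lattice_diffusion(n_block=4000, seed=1))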

  13. Development of volume equations using data obtained by upper stem dendrometry with Monte Carlo integration: preliminary results for eastern redcedar

    Treesearch

    Thomas B. Lynch; Rodney E. Will; Rider Reynolds

    2013-01-01

    Preliminary results are given for development of an eastern redcedar (Juniperus virginiana) cubic-volume equation based on measurements of redcedar sample tree stem volume using dendrometry with Monte Carlo integration. Monte Carlo integration techniques can be used to provide unbiased estimates of stem cubic-foot volume based on upper stem diameter...

  14. An integrated biomechanical modeling approach to the ergonomic evaluation of drywall installation.

    PubMed

    Yuan, Lu; Buchholz, Bryan; Punnett, Laura; Kriebel, David

    2016-03-01

    Three different methodologies: work sampling, computer simulation and biomechanical modeling, were integrated to study the physical demands of drywall installation. PATH (Posture, Activity, Tools, and Handling), a work-sampling based method, was used to quantify the percent of time that the drywall installers were conducting different activities with different body segment (trunk, arm, and leg) postures. Utilizing Monte-Carlo simulation to convert the categorical PATH data into continuous variables as inputs for the biomechanical models, the required muscle contraction forces and joint reaction forces at the low back (L4/L5) and shoulder (glenohumeral and sternoclavicular joints) were estimated for a typical eight-hour workday. To demonstrate the robustness of this modeling approach, a sensitivity analysis was conducted to examine the impact of some quantitative assumptions that have been made to facilitate the modeling approach. The results indicated that the modeling approach seemed to be the most sensitive to both the distribution of work cycles for a typical eight-hour workday and the distribution and values of Euler angles that are used to determine the "shoulder rhythm." Other assumptions including the distribution of trunk postures did not appear to have a significant impact on the model outputs. It was concluded that the integrated approach might provide an applicable examination of physical loads during the non-routine construction work, especially for those operations/tasks that have certain patterns/sequences for the workers to follow. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Characterizing the Trade Space Between Capability and Complexity in Next Generation Cloud and Precipitation Observing Systems Using Markov Chain Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Mace, G. G.; Posselt, D. J.

    2017-12-01

    As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology Experiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, and visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
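
    The core of such an MCMC retrieval is a Metropolis-Hastings walk through parameter space whose posterior width reflects the assumed measurement uncertainties. The Python sketch below is a generic toy; the forward model, prior, and sigma_obs are placeholders, not the ACE/RADEX algorithm suite:

      import numpy as np

      def metropolis(log_post, x0, n=20_000, step=0.1, seed=None):
          """Random-walk Metropolis sampler; returns the chain of samples."""
          rng = np.random.default_rng(seed)
          x = np.asarray(x0, float)
          lp = log_post(x)
          chain = np.empty((n, x.size))
          for i in range(n):
              prop = x + step * rng.standard_normal(x.size)
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
                  x, lp = prop, lp_prop
              chain[i] = x
          return chain

      # Toy retrieval: y = forward(x) + noise; the posterior spread tracks the
      # assumed instrument uncertainty sigma_obs.
      forward = lambda x: np.array([x[0] ** 2 + x[1]])
      y_obs, sigma_obs = np.array([2.0]), 0.1
      log_post = lambda x: (-0.5 * np.sum(((forward(x) - y_obs) / sigma_obs) ** 2)
                            - 0.5 * np.sum(x ** 2))         # likelihood + Gaussian prior
      samples = metropolis(log_post, [0.5, 0.5], seed=0)
      print(samples[5000:].mean(axis=0), samples[5000:].std(axis=0))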

  16. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
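
    A quick numerical illustration of that claim (not the ensemble construction of the paper) is to compare the error of plain Monte Carlo against a scrambled Sobol sequence on a smooth integrand; it assumes SciPy's qmc module is available:

      import numpy as np
      from scipy.stats import qmc

      # Integrand on the unit square; its exact integral over [0,1]^2 is 0.
      f = lambda u: np.cos(2 * np.pi * u[:, 0]) * u[:, 1]

      n, reps, rng = 2 ** 12, 50, np.random.default_rng(0)
      mc_err, qmc_err = [], []
      for _ in range(reps):
          x_mc = rng.random((n, 2))                                  # pseudo-random points
          x_qmc = qmc.Sobol(d=2, scramble=True, seed=rng).random(n)  # scrambled Sobol points
          mc_err.append(abs(f(x_mc).mean()))
          qmc_err.append(abs(f(x_qmc).mean()))
      print("MC  RMS error:", np.sqrt(np.mean(np.square(mc_err))))
      print("QMC RMS error:", np.sqrt(np.mean(np.square(qmc_err))))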

  17. Infinite variance in fermion quantum Monte Carlo calculations.

    PubMed

    Shi, Hao; Zhang, Shiwei

    2016-03-01

    For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson Fermions, and is growing rapidly as a number of problems have been discovered recently to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas in quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve the problem. The solution does not require major modifications to standard algorithms, adding a "bridge link" to the imaginary-time path integral. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  19. Theory of Financial Risk and Derivative Pricing

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2009-01-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  20. Theory of Financial Risk and Derivative Pricing - 2nd Edition

    NASA Astrophysics Data System (ADS)

    Bouchaud, Jean-Philippe; Potters, Marc

    2003-12-01

    Foreword; Preface; 1. Probability theory: basic notions; 2. Maximum and addition of random variables; 3. Continuous time limit, Ito calculus and path integrals; 4. Analysis of empirical data; 5. Financial products and financial markets; 6. Statistics of real prices: basic results; 7. Non-linear correlations and volatility fluctuations; 8. Skewness and price-volatility correlations; 9. Cross-correlations; 10. Risk measures; 11. Extreme correlations and variety; 12. Optimal portfolios; 13. Futures and options: fundamental concepts; 14. Options: hedging and residual risk; 15. Options: the role of drift and correlations; 16. Options: the Black and Scholes model; 17. Options: some more specific problems; 18. Options: minimum variance Monte-Carlo; 19. The yield curve; 20. Simple mechanisms for anomalous price statistics; Index of most important symbols; Index.

  1. Combining path integration and remembered landmarks when navigating without vision.

    PubMed

    Kalia, Amy A; Schrater, Paul R; Legge, Gordon E

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.

  2. Combining Path Integration and Remembered Landmarks When Navigating without Vision

    PubMed Central

    Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742

  3. Using MathCad to Evaluate Exact Integral Formulations of Spacecraft Orbital Heats for Primitive Surfaces at Any Orientation

    NASA Technical Reports Server (NTRS)

    Pinckney, John

    2010-01-01

    With the advent of high-speed computing, Monte Carlo ray-tracing techniques have become the preferred method for evaluating spacecraft orbital heats. Monte Carlo has its greatest advantage where there are many interacting surfaces. However, Monte Carlo programs are specialized programs that suffer from some inaccuracy, long calculation times, and high purchase cost. A general orbital heating integral is presented here that is accurate, fast, and runs on MathCad, a generally available engineering mathematics program. The integral is easy to read, understand, and alter. The integral can be applied to unshaded primitive surfaces at any orientation. The method is limited to direct heating calculations. This integral formulation can be used for quick orbit evaluations and for spot-checking Monte Carlo results.

  4. Monte Carlo methods for multidimensional integration for European option pricing

    NASA Astrophysics Data System (ADS)

    Todorov, V.; Dimov, I. T.

    2016-10-01

    In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of constants, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
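
    The crude Monte Carlo baseline referred to above amounts to averaging discounted payoffs over risk-neutral samples of the terminal price. A minimal Python sketch (Black-Scholes dynamics assumed; parameters are illustrative):

      import numpy as np

      def euro_call_mc(s0, k, r, sigma, T, n=1_000_000, seed=None):
          """Crude Monte Carlo price of a European call: average the discounted
          payoff over terminal prices sampled from the risk-neutral lognormal."""
          rng = np.random.default_rng(seed)
          z = rng.standard_normal(n)
          s_t = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
          payoff = np.maximum(s_t - k, 0.0)
          disc = np.exp(-r * T)
          return disc * payoff.mean(), disc * payoff.std(ddof=1) / np.sqrt(n)

      price, err = euro_call_mc(s0=100, k=105, r=0.03, sigma=0.2, T=1.0, seed=0)
      print(f"price = {price:.3f} +/- {err:.3f}")   # Black-Scholes value is about 7.13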

  5. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information about the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight lines and triangles. However, limited knowledge exists regarding the role of path complexity in path integration. Moreover, little is known about how information from different sensory input systems (like vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables used were average trajectory error, walking speed, and distance traveled. The results indicated that both groups that walked on a circular path and both groups that received visual information reproduced the path more accurately. Moreover, the performance of the group that received proprioceptive information and later walked on a figure-eight path was less accurate than that of the corresponding circular group. The groups that had visual information also walked faster compared to the groups that had proprioceptive information. The results of the current study highlight the roles of different sensory inputs in performing blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. ICF target 2D modeling using Monte Carlo SNB electron thermal transport in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2016-10-01

    The iSNB (implicit Schurtz-Nicolai-Busquet) multigroup diffusion electron thermal transport method is adapted into a Monte Carlo (MC) transport method to better model angular and long mean free path non-local effects. The MC model was first implemented in the 1D LILAC code to verify consistency with the iSNB model. Implementation of the MC SNB model in the 2D DRACO code enables higher fidelity non-local thermal transport modeling in 2D implosions such as polar drive experiments on NIF. The final step is to optimize the MC model by hybridizing it with a MC version of the iSNB diffusion method. The hybrid method will combine the efficiency of a diffusion method in intermediate mean free path regions with the accuracy of a transport method in long mean free path regions, allowing for improved computational efficiency while maintaining accuracy. Work to date on the method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  7. Real-space finite-difference approach for multi-body systems: path-integral renormalization group method and direct energy minimization method.

    PubMed

    Sasaki, Akira; Kojo, Masashi; Hirose, Kikuji; Goto, Hidekazu

    2011-11-02

    The path-integral renormalization group and direct energy minimization method of practical first-principles electronic structure calculations for multi-body systems within the framework of the real-space finite-difference scheme are introduced. These two methods can handle higher dimensional systems with consideration of the correlation effect. Furthermore, they can be easily extended to the multicomponent quantum systems which contain more than two kinds of quantum particles. The key to the present methods is employing linear combinations of nonorthogonal Slater determinants (SDs) as multi-body wavefunctions. As one of the noticeable results, the same accuracy as the variational Monte Carlo method is achieved with a few SDs. This enables us to study the entire ground state consisting of electrons and nuclei without the need to use the Born-Oppenheimer approximation. Recent activities on methodological developments aiming towards practical calculations such as the implementation of auxiliary field for Coulombic interaction, the treatment of the kinetic operator in imaginary-time evolutions, the time-saving double-grid technique for bare-Coulomb atomic potentials and the optimization scheme for minimizing the total-energy functional are also introduced. As test examples, the total energy of the hydrogen molecule, the atomic configuration of the methylene and the electronic structures of two-dimensional quantum dots are calculated, and the accuracy, availability and possibility of the present methods are demonstrated.

  8. Kinetic Monte Carlo Simulation of Oxygen Diffusion in Ytterbium Disilicate

    NASA Astrophysics Data System (ADS)

    Good, Brian

    2015-03-01

    Ytterbium disilicate is of interest as a potential environmental barrier coating for aerospace applications, notably for use in next generation jet turbine engines. In such applications, the diffusion of oxygen and water vapor through these coatings is undesirable if high temperature corrosion is to be avoided. In an effort to understand the diffusion process in these materials, we have performed kinetic Monte Carlo simulations of vacancy-mediated oxygen diffusion in Ytterbium Disilicate. Oxygen vacancy site energies and diffusion barrier energies are computed using Density Functional Theory. We find that many potential diffusion paths involve large barrier energies, but some paths have barrier energies smaller than one electron volt. However, computed vacancy formation energies suggest that the intrinsic vacancy concentration is small in the pure material, with the result that the material is unlikely to exhibit significant oxygen permeability.
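
    The simulation step underlying such a study is the standard residence-time (BKL) kinetic Monte Carlo move: a hop is chosen with probability proportional to its Arrhenius rate and the clock advances by an exponentially distributed increment. A hedged Python sketch with an illustrative attempt frequency and barrier values (not the DFT results of the abstract):

      import numpy as np

      KB_EV = 8.617333262e-5   # Boltzmann constant in eV/K

      def kmc_step(barriers, T, nu0=1e13, seed=None):
          """One residence-time KMC step for a vacancy with the given hop barrier
          energies (eV): returns (index of chosen hop, time increment in s)."""
          rng = np.random.default_rng(seed)
          rates = nu0 * np.exp(-np.asarray(barriers, float) / (KB_EV * T))
          total = rates.sum()
          chosen = rng.choice(len(rates), p=rates / total)   # which hop occurs
          dt = -np.log(rng.uniform()) / total                # exponential waiting time
          return chosen, dt

      # Example: two easy paths (< 1 eV) and one hard path, at 1400 K.
      print(kmc_step([0.7, 0.9, 2.1], T=1400.0, seed=0))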

  9. Combined Effect of Random Transmit Power Control and Inter-Path Interference Cancellation on DS-CDMA Packet Mobile Communications

    NASA Astrophysics Data System (ADS)

    Kudoh, Eisuke; Ito, Haruki; Wang, Zhisen; Adachi, Fumiyuki

    In mobile communication systems, high-speed packet data services are in demand. In high-speed data transmission, throughput degrades severely due to inter-path interference (IPI). Recently, we proposed random transmit power control (TPC) to increase the uplink throughput of DS-CDMA packet mobile communications. In this paper, we apply IPI cancellation in addition to the random TPC. We derive a numerical expression for the received signal-to-interference plus noise power ratio (SINR) and introduce an IPI cancellation factor. We also derive a numerical expression for the system throughput when IPI is cancelled ideally, for comparison with the Monte Carlo numerically evaluated system throughput. Then we evaluate, by the Monte Carlo numerical computation method, the combined effect of random TPC and IPI cancellation on the uplink throughput of DS-CDMA packet mobile communications.

  10. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. Taking cone beam CT (CBCT) projection simulation as an example, a significant amount of computation is wasted on transporting photons that do not reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package, gMMC, on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, where all the computations are spent on those photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
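
    The acceptance step described above can be illustrated with a toy Metropolis-Hastings walk over whole photon paths with fixed endpoints. In the sketch below the path weight is only an attenuation factor and the proposal moves one interior vertex at a time; mu, the weight model, and all parameters are assumptions for illustration, not the gMMC implementation:

      import numpy as np

      def path_weight(vertices, mu=0.2):
          """Toy weight of a photon path in a homogeneous medium: the attenuation
          factor exp(-mu * total length). A real estimator would also include the
          differential scattering cross sections at each vertex."""
          seg = np.diff(vertices, axis=0)
          return np.exp(-mu * np.linalg.norm(seg, axis=1).sum())

      def sample_paths(source, detector, n_scatter=3, n_paths=10_000, step=0.5, seed=None):
          """Metropolis-Hastings over entire paths: perturb one interior scattering
          vertex per iteration and accept/reject against the path weight."""
          rng = np.random.default_rng(seed)
          path = np.linspace(source, detector, n_scatter + 2)   # initial straight path
          w = path_weight(path)
          chain = []
          for _ in range(n_paths):
              trial = path.copy()
              i = rng.integers(1, n_scatter + 1)                # never move the endpoints
              trial[i] += step * rng.standard_normal(source.size)
              w_trial = path_weight(trial)
              if rng.uniform() < w_trial / w:                   # symmetric proposal
                  path, w = trial, w_trial
              chain.append(path.copy())
          return np.array(chain)

      paths = sample_paths(np.array([0.0, 0.0]), np.array([10.0, 0.0]), seed=0)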

  11. Semiclassical propagation of Wigner functions.

    PubMed

    Dittrich, T; Gómez, E A; Pachón, L A

    2010-06-07

    We present a comprehensive study of semiclassical phase-space propagation in the Wigner representation, emphasizing numerical applications, in particular as an initial-value representation. Two semiclassical approximation schemes are discussed. The propagator of the Wigner function based on van Vleck's approximation replaces the Liouville propagator by a quantum spot with an oscillatory pattern reflecting the interference between pairs of classical trajectories. Employing phase-space path integration instead, caustics in the quantum spot are resolved in terms of Airy functions. We apply both to two benchmark models of nonlinear molecular potentials, the Morse oscillator and the quartic double well, to test them in standard tasks such as computing autocorrelation functions and propagating coherent states. The performance of semiclassical Wigner propagation is very good even in the presence of marked quantum effects, e.g., in coherent tunneling and in propagating Schrodinger cat states, and of classical chaos in four-dimensional phase space. We suggest options for an effective numerical implementation of our method and for integrating it in Monte-Carlo-Metropolis algorithms suitable for high-dimensional systems.

  12. Diagrammatic Monte Carlo study of Fröhlich polaron dispersion in two and three dimensions

    NASA Astrophysics Data System (ADS)

    Hahn, Thomas; Klimin, Sergei; Tempere, Jacques; Devreese, Jozef T.; Franchini, Cesare

    2018-04-01

    We present results for the solution of the large polaron Fröhlich Hamiltonian in three dimensions (3D) and two dimensions (2D) obtained via the diagrammatic Monte Carlo (DMC) method. Our implementation is based on the approach by Mishchenko [A. S. Mishchenko et al., Phys. Rev. B 62, 6317 (2000), 10.1103/PhysRevB.62.6317]. Polaron ground state energies and effective polaron masses are successfully benchmarked against data obtained using Feynman's path integral formalism. By comparing 3D and 2D data, we verify the analytically exact scaling relations for energies and effective masses from 3D to 2D, which provides a stringent test of the quality of DMC predictions. The accuracy of our results is further proven by providing values for the exactly known coefficients in weak- and strong-coupling expansions. Moreover, we compute polaron dispersion curves which are validated with analytically known lower and upper limits in the small-coupling regime and verify the first-order expansion results for larger couplings, thus disproving previous critiques of the apparent incompatibility of DMC with analytical results and furnishing a useful reference for a wide range of coupling strengths.

  13. Differential pencil beam dose computation model for photons.

    PubMed

    Mohan, R; Chui, C; Lidofsky, L

    1986-01-01

    Differential pencil beam (DPB) is defined as the dose distribution relative to the position of the first collision, per unit collision density, for a monoenergetic pencil beam of photons in an infinite homogeneous medium of unit density. We have generated DPB dose distribution tables for a number of photon energies in water using the Monte Carlo method. The three-dimensional (3D) nature of the transport of photons and electrons is automatically incorporated in DPB dose distributions. Dose is computed by evaluating 3D integrals of DPB dose. The DPB dose computation model has been applied to calculate dose distributions for 60Co and accelerator beams. Calculations for the latter are performed using energy spectra generated with the Monte Carlo program. To predict dose distributions near the beam boundaries defined by the collimation system as well as blocks, we utilize the angular distribution of incident photons. Inhomogeneities are taken into account by attenuating the primary photon fluence exponentially utilizing the average total linear attenuation coefficient of intervening tissue, by multiplying photon fluence by the linear attenuation coefficient to yield the number of collisions in the scattering volume, and by scaling the path between the scattering volume element and the computation point by an effective density.

  14. Full-dimensional quantum calculations of the dissociation energy, zero-point, and 10 K properties of H7+/D7+ clusters using an ab initio potential energy surface.

    PubMed

    Barragán, Patricia; Pérez de Tudela, Ricardo; Qu, Chen; Prosmiti, Rita; Bowman, Joel M

    2013-07-14

    Diffusion Monte Carlo (DMC) and path-integral Monte Carlo computations of the vibrational ground state and 10 K equilibrium state properties of the H7+/D7+ cations are presented, using an ab initio full-dimensional potential energy surface. The DMC zero-point energies of the dissociated fragments H5+(D5+) + H2(D2) are also calculated, and from these results and the electronic dissociation energy, dissociation energies D0 of 752 ± 15 and 980 ± 14 cm⁻¹ are reported for H7+ and D7+, respectively. Due to the known error in the electronic dissociation energy of the potential surface, these quantities are underestimated by roughly 65 cm⁻¹. These values are rigorously determined for the first time and compared with previous theoretical estimates from electronic structure calculations using standard harmonic analysis, and with available experimental measurements. Probability density distributions are also computed for the ground vibrational and 10 K states of H7+ and D7+. These are qualitatively described as a central H3+/D3+ core surrounded by "solvent" H2/D2 molecules that nearly freely rotate.

  15. DNA-DNA interaction beyond the ground state

    NASA Astrophysics Data System (ADS)

    Lee, D. J.; Wynveen, A.; Kornyshev, A. A.

    2004-11-01

    The electrostatic interaction potential between DNA duplexes in solution is a basis for the statistical mechanics of columnar DNA assemblies. It may also play an important role in recombination of homologous genes. We develop a theory of this interaction that includes thermal torsional fluctuations of DNA using field-theoretical methods and Monte Carlo simulations. The theory extends and rationalizes the earlier suggested variational approach which was developed in the context of a ground state theory of interaction of nonhomologous duplexes. It shows that the heuristic variational theory is equivalent to the Hartree self-consistent field approximation. By comparison of the Hartree approximation with an exact solution based on the QM analogy of path integrals, as well as Monte Carlo simulations, we show that this easily analytically-tractable approximation works very well in most cases. Thermal fluctuations do not remove the ability of DNA molecules to attract each other at favorable azimuthal conformations, neither do they wash out the possibility of electrostatic “snap-shot” recognition of homologous sequences, considered earlier on the basis of ground state calculations. At short distances DNA molecules undergo a “torsional alignment transition,” which is first order for nonhomologous DNA and weaker order for homologous sequences.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiaoyao; Hall, Randall W.; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  17. On the mode-coupling treatment of collective density fluctuations for quantum liquids: para-hydrogen and normal liquid helium.

    PubMed

    Kletenik-Edelman, Orly; Reichman, David R; Rabani, Eran

    2011-01-28

    A novel quantum mode coupling theory combined with a kinetic approach is developed for the description of collective density fluctuations in quantum liquids characterized by Boltzmann statistics. Three mode-coupling approximations are presented and applied to study the dynamic response of para-hydrogen near the triple point and normal liquid helium above the λ-transition. The theory is compared with experimental results and to the exact imaginary time data generated by path integral Monte Carlo simulations. While for liquid para-hydrogen the combination of kinetic and quantum mode-coupling theory provides semi-quantitative results for both short and long time dynamics, it fails for normal liquid helium. A discussion of this failure based on the ideal gas limit is presented.

  18. Theoretical investigations of open-shell systems: 1. Spectral simulation of the 2s2p² ²D ← 2s²2p ²P° valence transition in the boron diargon cluster, and 2. Quantum Monte Carlo calculations of boron in solid molecular hydrogen

    NASA Astrophysics Data System (ADS)

    Krumrine, Jennifer Rebecca

    This dissertation is concerned in part with the construction of accurate pairwise potentials, based on reliable ab initio potential energy surfaces (PES's), which are fully anisotropic in the sense that multiple PES's are accessible to systems with orientational electronic properties. We have carried out several investigations of B (2s²2p ²P°) with spherical ligands: (1) an investigation of the electronic spectrum of the BAr2 complex and (2) two related studies of the equilibrium properties and spectral simulation of B embedded in solid pH2. Our investigations suggest that it cannot be assumed that nuclear motion in an open-shell system occurs on a single PES. The 2s2p² ²D ← 2s²2p ²P° valence transition in the BAr2 cluster is investigated. The electronic transition within BAr2 is modeled theoretically; the excited potential energy surfaces of the five-fold degenerate B(2s2p² ²D) state within the ternary complex are computed using a pairwise-additive model. We also present a collaborative path integral molecular dynamics investigation of the equilibrium properties of boron trapped in solid para-hydrogen (pH2) and a path integral Monte Carlo spectral simulation. Using fully anisotropic pair potentials, coupling of the electronic and nuclear degrees of freedom is observed and is found to be an essential feature in understanding the behavior and determining the energy of the impure solid, especially in highly anisotropic matrices. We employ the variational Monte Carlo method to further study the behavior of ground state B embedded in solid pH2. When a boron atom exists in a substitutional site in a lattice, the anisotropic distortion of the local lattice plays a minimal role in the energetics. However, when a nearest-neighbor vacancy is present along with the boron impurity, two phenomena are found to influence the behavior of the impure quantum solid: (1) orientation of the 2p orbital to minimize the energy of the impurity and (2) distortion of the local lattice structure to promote an energetically favorable nuclear configuration. This research was supported by the Joint Program for Atomic, Molecular and Optical Science sponsored by the University of Maryland at College Park and the National Institute of Standards and Technology, and by the U.S. Air Force Office of Scientific Research. (Abstract shortened by UMI.)

  19. Path Integration on the Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1987-10-01

    Feynman's path integral is considered on the Poincaré upper half-plane. It is shown that the fundamental solution to the heat equation ∂f/∂t = Δ_H f can be expressed in terms of a path integral. A simple relation between the path integral and the Selberg trace formula is discussed briefly.

  20. Random Numbers and Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Scherer, Philipp O. J.

    Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages Monte Carlo methods are very useful which sample the integration volume at randomly chosen points. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with given probability distribution which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
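
    As a concrete illustration of the variance reduction mentioned above (a minimal sketch, not code from the book; the integrand and sampling density are chosen purely for demonstration), the following compares plain Monte Carlo integration with importance sampling for a one-dimensional integral whose integrand is concentrated near the origin:

    ```python
    import math
    import random

    def plain_mc(n):
        """Plain Monte Carlo estimate of I = integral_0^1 exp(-10 x) dx."""
        return sum(math.exp(-10.0 * random.random()) for _ in range(n)) / n

    def importance_mc(n, lam=8.0):
        """Importance-sampled estimate: draw x from p(x) ~ lam*exp(-lam*x) on
        [0, 1] by inverse transform and average f(x)/p(x); the samples then
        follow the integrand and the variance drops."""
        norm = 1.0 - math.exp(-lam)                 # normalization of p on [0, 1]
        total = 0.0
        for _ in range(n):
            u = random.random()
            x = -math.log(1.0 - u * norm) / lam     # inverse CDF on [0, 1]
            p = lam * math.exp(-lam * x) / norm
            total += math.exp(-10.0 * x) / p
        return total / n

    exact = (1.0 - math.exp(-10.0)) / 10.0
    print("exact              :", exact)
    print("plain MC (10^4)    :", plain_mc(10_000))
    print("importance (10^4)  :", importance_mc(10_000))
    ```

    With the same number of samples, the importance-sampled estimate typically lands much closer to the exact value, which is the point of sampling the important configurations preferentially.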

  1. Path integration of head direction: updating a packet of neural activity at the correct speed using axonal conduction delays.

    PubMed

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a "look-up" table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity.

  2. Path Integration of Head Direction: Updating a Packet of Neural Activity at the Correct Speed Using Axonal Conduction Delays

    PubMed Central

    Walters, Daniel; Stringer, Simon; Rolls, Edmund

    2013-01-01

    The head direction cell system is capable of accurately updating its current representation of head direction in the absence of visual input. This is known as the path integration of head direction. An important question is how the head direction cell system learns to perform accurate path integration of head direction. In this paper we propose a model of velocity path integration of head direction in which the natural time delay of axonal transmission between a linked continuous attractor network and competitive network acts as a timing mechanism to facilitate the correct speed of path integration. The model effectively learns a “look-up” table for the correct speed of path integration. In simulation, we show that the model is able to successfully learn two different speeds of path integration across two different axonal conduction delays, and without the need to alter any other model parameters. An implication of this model is that, by learning look-up tables for each speed of path integration, the model should exhibit a degree of robustness to damage. In simulations, we show that the speed of path integration is not significantly affected by degrading the network through removing a proportion of the cells that signal rotational velocity. PMID:23526976

  3. A kMC-MD method with generalized move-sets for the simulation of folding of α-helical and β-stranded peptides.

    PubMed

    Peter, Emanuel K; Pivkin, Igor V; Shea, Joan-Emma

    2015-04-14

    In Monte Carlo simulations of protein folding, pathways and folding times depend on the appropriate choice of the Monte Carlo move or process path. We developed a generalized set of process paths for a hybrid kinetic Monte Carlo-molecular dynamics algorithm, which makes use of a novel constant time-update and allows formation of α-helical and β-stranded secondary structures. We apply our new algorithm to the folding of three different proteins: TrpCage, GB1, and TrpZip4. All three systems are seen to fold within the range of the experimental folding times. For the β-hairpins, we observe that loop formation is the rate-determining process, followed by collapse and formation of the native core. Cluster analysis of both peptides reveals that GB1 folds with equal likelihood along a zipper or a hydrophobic collapse mechanism, while TrpZip4 follows primarily a zipper pathway. The difference observed in the folding behavior of the two proteins can be attributed to the different arrangements of their hydrophobic cores: strongly packed and dry in the case of TrpZip4, and partially hydrated in the case of GB1.

  4. Path integral Monte Carlo simulations of dense carbon-hydrogen plasmas

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Militzer, Burkhard; Benedict, Lorin X.; Soubiran, François; Sterne, Philip A.; Driver, Kevin P.

    2018-03-01

    Carbon-hydrogen plasmas and hydrocarbon materials are of broad interest to laser shock experimentalists, high energy density physicists, and astrophysicists. Accurate equations of state (EOSs) of hydrocarbons are valuable for various studies from inertial confinement fusion to planetary science. By combining path integral Monte Carlo (PIMC) results at high temperatures and density functional theory molecular dynamics results at lower temperatures, we compute the EOSs for hydrocarbons from simulations performed at 1473 separate (ρ, T)-points distributed over a range of compositions. These methods accurately treat electronic excitation effects with neither adjustable parameter nor experimental input. PIMC is also an accurate simulation method that is capable of treating many-body interaction and nuclear quantum effects at finite temperatures. These methods therefore provide a benchmark-quality EOS that surpasses that of semi-empirical and Thomas-Fermi-based methods in the warm dense matter regime. By comparing our first-principles EOS to the LEOS 5112 model for CH, we validate the specific heat assumptions in this model but suggest that the Grüneisen parameter is too large at low temperatures. Based on our first-principles EOSs, we predict the principal Hugoniot curve of polystyrene to be 2%-5% softer at maximum shock compression than that predicted by orbital-free density functional theory and SESAME 7593. By investigating the atomic structure and chemical bonding of hydrocarbons, we show a drastic decrease in the lifetime of chemical bonds in the pressure interval from 0.4 to 4 megabar. We find the assumption of linear mixing to be valid for describing the EOS and the shock Hugoniot curve of hydrocarbons in the regime of partially ionized atomic liquids. We make predictions of the shock compression of glow-discharge polymers and investigate the effects of oxygen content and C:H ratio on its Hugoniot curve. Our full suite of first-principles simulation results may be used to benchmark future theoretical investigations pertaining to hydrocarbon EOSs and should be helpful in guiding the design of future experiments on hydrocarbons in the gigabar regime.

  5. Monte Carlo simulation for kinetic chemotaxis model: An application to the traveling population wave

    NASA Astrophysics Data System (ADS)

    Yasuda, Shugo

    2017-02-01

    A Monte Carlo simulation of chemotactic bacteria is developed on the basis of the kinetic model and is applied to a one-dimensional traveling population wave in a microchannel. In this simulation, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method to calculate the macroscopic transport of the chemical cues in the environment. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally and reveal the microscopic dynamics of the bacteria coupled with the macroscopic transport of the chemical cues and the bacterial population density. The results obtained by the Monte Carlo method are also compared with the asymptotic solution derived from the kinetic chemotaxis equation in the continuum limit, where the Knudsen number, defined as the ratio of the mean free path of a bacterium to the characteristic length of the system, vanishes. The validity of the Monte Carlo method in the asymptotic regime of small Knudsen numbers is numerically verified.
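
    A stripped-down sketch of the run-and-tumble Monte Carlo step described above (not the authors' code; in the actual method the tumble statistics are coupled to a finite-volume solution of the chemical field, whereas here a fixed attractant gradient along +x is assumed):

    ```python
    import random

    def simulate(n_bacteria=500, n_steps=2000, dt=0.01, speed=1.0,
                 base_rate=1.0, bias=0.5):
        """Each bacterium runs at constant speed and tumbles (randomizes its
        direction) at a Poisson rate that is lowered when it moves up the
        assumed chemoattractant gradient, producing a net chemotactic drift."""
        positions = [0.0] * n_bacteria
        directions = [random.choice((-1.0, 1.0)) for _ in range(n_bacteria)]
        for _ in range(n_steps):
            for i in range(n_bacteria):
                moving_up = directions[i] > 0.0          # gradient points along +x
                rate = base_rate * (1.0 - bias if moving_up else 1.0 + bias)
                if random.random() < rate * dt:          # tumble event this step
                    directions[i] = random.choice((-1.0, 1.0))
                positions[i] += speed * directions[i] * dt
        return sum(positions) / n_bacteria

    print("mean drift along the gradient:", simulate())
    ```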

  6. Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Iwano, Takayuki; Umeyama, Shinji

    2015-12-01

    fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS is becoming common. With LEDs, fNIRS equipment can be less expensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues at the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase from using LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
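
    The mechanism can be illustrated with a small modified Beer-Lambert calculation (a hypothetical sketch; the extinction coefficients and partial path lengths below are placeholder numbers, not the Monte Carlo values from this study): a mismatch between the true wavelength-dependent path lengths and those assumed in the inversion turns a pure oxy-hemoglobin change into a spurious deoxy-hemoglobin signal.

    ```python
    import numpy as np

    # Hypothetical numbers for illustration only: rows are wavelengths,
    # columns are chromophores [HbO2, HbR].
    eps = np.array([[1.5, 3.8],    # extinction coefficients at ~760 nm (assumed units)
                    [2.5, 1.8]])   # extinction coefficients at ~850 nm

    L_true = np.array([6.0, 5.5])  # true partial path lengths at the two wavelengths (mm)
    L_used = np.array([5.8, 5.8])  # path lengths assumed in the reconstruction

    dC_true = np.array([1.0, 0.0])  # a pure HbO2 change and no HbR change

    # Forward model (modified Beer-Lambert): optical density change per wavelength.
    dOD = (eps * L_true[:, None]) @ dC_true

    # Inversion with slightly wrong path lengths introduces crosstalk.
    dC_est = np.linalg.solve(eps * L_used[:, None], dOD)

    print("estimated [dHbO2, dHbR]:", dC_est)
    print("crosstalk HbO2 -> HbR  :", dC_est[1] / dC_est[0])
    ```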

  7. Geometry, Heat Equation and Path Integrals on the Poincaré Upper Half-Plane

    NASA Astrophysics Data System (ADS)

    Kubo, R.

    1988-01-01

    Geometry, heat equation and Feynman's path integrals are studied on the Poincaré upper half-plane. The fundamental solution to the heat equation ∂f/∂t = Δ_H f is expressed in terms of a path integral defined on the upper half-plane. It is shown that Kac's statement that Feynman's path integral satisfies the Schrödinger equation is also valid for our case.

  8. Moment Lyapunov Exponent and Stochastic Stability of Binary Airfoil under Combined Harmonic and Non-Gaussian Colored Noise Excitations

    NASA Astrophysics Data System (ADS)

    Hu, D. L.; Liu, X. B.

    Both periodic loading and random forces commonly co-exist in real engineering applications. However, the dynamic behavior, and especially the dynamic stability, of systems under parametric periodic and random excitations has received little attention in the literature. In this study, the moment Lyapunov exponent and stochastic stability of a binary airfoil under combined harmonic and non-Gaussian colored noise excitations are investigated. The noise is simplified to an Ornstein-Uhlenbeck process by applying the path-integral method. Via the singular perturbation method, the second-order expansions of the moment Lyapunov exponent are obtained, which agree well with results obtained by Monte Carlo simulation. Finally, the effects of the noise and of parametric resonance (such as subharmonic resonance and combination additive resonance) on the stochastic stability of the binary airfoil system are discussed.

  9. First-Principles Calculation of the Third Virial Coefficient of Helium

    PubMed Central

    Garberoglio, Giovanni; Harvey, Allan H.

    2009-01-01

    Knowledge of the pair and three-body potential-energy surfaces of helium is now sufficient to allow calculation of the third density virial coefficient, C(T), with significantly smaller uncertainty than that of existing experimental data. In this work, we employ the best available pair and three-body potentials for helium and calculate C(T) with path-integral Monte Carlo (PIMC) calculations supplemented by semiclassical calculations. The values of C(T) presented extend from 24.5561 K to 10 000 K. In the important metrological range of temperatures near 273.16 K, our uncertainties are smaller than the best experimental results by approximately an order of magnitude, and the reduction in uncertainty at other temperatures is at least as great. For convenience in calculation of C(T) and its derivatives, a simple correlating equation is presented. PMID:27504226

  10. Bose Condensation at He-4 Interfaces

    NASA Technical Reports Server (NTRS)

    Draeger, E. W.; Ceperley, D. M.

    2003-01-01

    Path Integral Monte Carlo was used to calculate the Bose-Einstein condensate fraction at the surface of a helium film at T = 0.77 K, as a function of density. Moving from the center of the slab to the surface, the condensate fraction was found to initially increase with decreasing density to a maximum value of 0.9, before decreasing. Long wavelength density correlations were observed in the static structure factor at the surface of the slab. A surface dispersion relation was calculated from imaginary-time density-density correlations. Similar calculations of the superfluid density throughout He-4 droplets doped with linear impurities (HCN)_n are presented. After deriving a local estimator for the superfluid density distribution, we find a decreased superfluid response in the first solvation layer. This effective normal fluid exhibits temperature dependence similar to that of a two-dimensional helium system.

  11. Highly efficient luminescent solar concentrators based on earth-abundant indirect-bandgap silicon quantum dots

    NASA Astrophysics Data System (ADS)

    Meinardi, Francesco; Ehrenberg, Samantha; Dhamo, Lorena; Carulli, Francesco; Mauri, Michele; Bruni, Francesco; Simonutti, Roberto; Kortshagen, Uwe; Brovelli, Sergio

    2017-02-01

    Building-integrated photovoltaics is gaining consensus as a renewable energy technology for producing electricity at the point of use. Luminescent solar concentrators (LSCs) could extend architectural integration to the urban environment by realizing electrode-less photovoltaic windows. Crucial for large-area LSCs is the suppression of reabsorption losses, which requires emitters with negligible overlap between their absorption and emission spectra. Here, we demonstrate the use of indirect-bandgap semiconductor nanostructures such as highly emissive silicon quantum dots. Silicon is non-toxic, low-cost and ultra-earth-abundant, which avoids the limitations to the industrial scaling of quantum dots composed of low-abundance elements. Suppressed reabsorption and scattering losses lead to nearly ideal LSCs with an optical efficiency of η = 2.85%, matching state-of-the-art semi-transparent LSCs. Monte Carlo simulations indicate that optimized silicon quantum dot LSCs have a clear path to η > 5% for 1 m² devices. We are finally able to realize flexible LSCs with performances comparable to those of flat concentrators, which opens the way to a new design freedom for building-integrated photovoltaics elements.

  12. Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx

    2015-07-23

    The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.
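
    A schematic version of the geometric part of such a calculation (an illustrative sketch only; the chamber dimensions, detector radius, and alpha range are invented numbers, and intersections with the chamber wall are ignored): decay points are sampled uniformly in a truncated-cone volume, an isotropic emission direction is drawn, and the event is counted if the straight-line path reaches a detector disc on the base within the alpha particle range.

    ```python
    import math
    import random

    def sample_point(h=5.0, r_bottom=3.5, r_top=2.0):
        """Uniform random point inside a truncated cone (axis along z, base at z=0).
        Rejection of the whole (x, y, z) triple makes the accepted points uniform
        in volume."""
        while True:
            z = random.uniform(0.0, h)
            r_max = r_bottom + (r_top - r_bottom) * z / h
            x = random.uniform(-r_bottom, r_bottom)
            y = random.uniform(-r_bottom, r_bottom)
            if x * x + y * y <= r_max * r_max:
                return x, y, z

    def hits_detector(p, r_det=1.5, alpha_range=4.0):
        """Isotropic emission; count the event if the downward ray reaches the
        base plane inside the detector disc within the alpha range."""
        x, y, z = p
        cos_t = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        if cos_t >= 0.0:                       # emitted away from the base
            return False
        t = -z / cos_t                         # path length to the base plane
        if t > alpha_range:
            return False
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        xd = x + t * sin_t * math.cos(phi)
        yd = y + t * sin_t * math.sin(phi)
        return xd * xd + yd * yd <= r_det * r_det

    n = 200_000
    eff = sum(hits_detector(sample_point()) for _ in range(n)) / n
    print("geometric detection efficiency:", eff)
    ```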

  13. Relative importance of multiple scattering by air molecules and aerosols in forming the atmospheric path radiance in the visible and near-infrared parts of the spectrum.

    PubMed

    Antoine, D; Morel, A

    1998-04-20

    Single and multiple scattering by molecules or by atmospheric aerosols only (homogeneous scattering), and heterogeneous scattering by aerosols and molecules, are recorded in Monte Carlo simulations. It is shown that heterogeneous scattering (1) always contributes significantly to the path reflectance (rho(path)), (2) is realized at the expense of homogeneous scattering, (3) decreases when aerosols are absorbing, and (4) introduces deviations in the spectral dependencies of reflectances compared with the Rayleigh exponent and the aerosol angstrom exponent. The ratio of rho(path) to the Rayleigh reflectance for an aerosol-free atmosphere is linearly related to the aerosol optical thickness. This result provides a basis for a new scheme for atmospheric correction of remotely sensed ocean color observations.

  14. Path integration on the hyperbolic plane with a magnetic field

    NASA Astrophysics Data System (ADS)

    Grosche, Christian

    1990-08-01

    In this paper I discuss the path integrals on three formulations of hyperbolic geometry, where a constant magnetic field B is included. These are: the pseudosphere Λ2, the Poincaré disc D, and the hyperbolic strip S. The corresponding path integrals can be reformulated in terms of the path integral for the modified Pöschl-Teller potential. The wave-functions and the energy spectrum for the discrete and continuous part of the spectrum are explicitly calculated in each case. First the results are compared for the limit B → 0 with previous calculations and second with the path integration on the Poincaré upper half-plane U. This work is a continuation of the path integral calculations for the free motion on the various formulations on the hyperbolic plane and for the case of constant magnetic field on the Poincaré upper half-plane U.

  15. Dissociable cognitive mechanisms underlying human path integration.

    PubMed

    Wiener, Jan M; Berthoz, Alain; Wolbers, Thomas

    2011-01-01

    Path integration is a fundamental mechanism of spatial navigation. In non-human species, it is assumed to be an online process in which a homing vector is updated continuously during an outward journey. In contrast, human path integration has been conceptualized as a configural process in which travelers store working memory representations of path segments, with the computation of a homing vector only occurring when required. To resolve this apparent discrepancy, we tested whether humans can employ different path integration strategies in the same task. Using a triangle completion paradigm, participants were instructed either to continuously update the start position during locomotion (continuous strategy) or to remember the shape of the outbound path and to calculate home vectors on basis of this representation (configural strategy). While overall homing accuracy was superior in the configural condition, participants were quicker to respond during continuous updating, strongly suggesting that homing vectors were computed online. Corroborating these findings, we observed reliable differences in head orientation during the outbound path: when participants applied the continuous updating strategy, the head deviated significantly from straight ahead in direction of the start place, which can be interpreted as a continuous motor expression of the homing vector. Head orientation-a novel online measure for path integration-can thus inform about the underlying updating mechanism already during locomotion. In addition to demonstrating that humans can employ different cognitive strategies during path integration, our two-systems view helps to resolve recent controversies regarding the role of the medial temporal lobe in human path integration.
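
    The distinction between the two strategies can be made concrete with a toy triangle-completion calculation (an illustration of the idea, not the authors' analysis; the path legs are arbitrary): the continuous strategy updates a homing vector after every movement, while the configural strategy stores the outbound legs and computes the same vector only when homing is required.

    ```python
    import math

    legs = [(3.0, 0.0), (0.0, 4.0)]          # two outbound path segments (metres)

    # Continuous strategy: keep a running homing vector, updated after each segment.
    home = (0.0, 0.0)
    for dx, dy in legs:
        home = (home[0] - dx, home[1] - dy)

    # Configural strategy: store the segments, compute the vector only at the end.
    home_cfg = (-sum(dx for dx, _ in legs), -sum(dy for _, dy in legs))

    print("continuous homing vector :", home)
    print("configural homing vector :", home_cfg)
    print("homing distance          :", math.hypot(*home))
    ```

    With perfect information the two strategies agree; the behavioral differences reported above arise from when the computation is performed and what must be held in working memory in the meantime.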

  16. Time-dependent integral equations of neutron transport for calculating the kinetics of nuclear reactors by the Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidenko, V. D., E-mail: Davidenko-VD@nrcki.ru; Zinchenko, A. S., E-mail: zin-sn@mail.ru; Harchenko, I. K.

    2016-12-15

    Integral equations for the shape functions in the adiabatic, quasi-static, and improved quasi-static approximations are presented. The approach to solving these equations by the Monte Carlo method is described.

  17. Search for anisotropy in the Debye-Waller factor of HCP solid 4He

    NASA Astrophysics Data System (ADS)

    Barnes, Ashleigh L.; Hinde, Robert J.

    2016-02-01

    The properties of hexagonal close packed (hcp) solid 4He are dominated by large atomic zero point motions. An accurate description of these motions is therefore necessary in order to accurately calculate the properties of the system, such as the Debye-Waller (DW) factors. A recent neutron scattering experiment reported significant anisotropy in the in-plane and out-of-plane DW factors for hcp solid 4He at low temperatures, where thermal effects are negligible and only zero-point motions are expected to contribute. By contrast, no such anisotropy was observed either in earlier experiments or in path integral Monte Carlo (PIMC) simulations of solid hcp 4He. However, the earlier experiments and the PIMC simulations were both carried out at higher temperatures where thermal effects could be substantial. We seek to understand the cause of this discrepancy through variational quantum Monte Carlo simulations utilizing an accurate pair potential and a modified trial wavefunction which allows for anisotropy. Near the melting density, we find no anisotropy in an ideal hcp 4He crystal. A theoretical equation of state is derived from the calculated energies of the ideal crystal over a range of molar volumes from 7.88 to 21.3 cm³, and is found to be in good qualitative agreement with experimental data.

  18. On the room-temperature phase diagram of high pressure hydrogen: an ab initio molecular dynamics perspective and a diffusion Monte Carlo study.

    PubMed

    Chen, Ji; Ren, Xinguo; Li, Xin-Zheng; Alfè, Dario; Wang, Enge

    2014-07-14

    The finite-temperature phase diagram of hydrogen in the region of phase IV and its neighborhood was studied using the ab initio molecular dynamics (MD) and the ab initio path-integral molecular dynamics (PIMD). The electronic structures were analyzed using the density-functional theory (DFT), the random-phase approximation, and the diffusion Monte Carlo (DMC) methods. Taking the state-of-the-art DMC results as a benchmark, comparisons of the energy differences between structures generated from the MD and PIMD simulations, with molecular and dissociated hydrogens, respectively, in the weak molecular layers of phase IV, indicate that standard functionals in DFT tend to underestimate the dissociation barrier of the weak molecular layers in this mixed phase. Because of this underestimation, inclusion of the quantum nuclear effects (QNEs) in PIMD using electronic structures generated with these functionals leads to artificially dissociated hydrogen layers in phase IV, and an error compensation between the neglect of QNEs and the deficiencies of these functionals exists in standard ab initio MD simulations. This analysis partly rationalizes why earlier ab initio MD simulations complement so well the experimental observations. Finally, the temperature and pressure dependence of the stability of phase IV was also studied and compared with earlier results.

  19. A geometrical optics approach for modeling atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Heba; Atia, Walid; Davis, Christopher C.

    2005-08-01

    Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes the optical phasefront to become distorted from propagation through turbulent eddies of varying sizes and refractive index. Turbulence also results in intensity scintillation and beam wander, which can severely impair the operation of target designation and free space optical (FSO) communications systems. We have developed a new model to assess the effects of turbulence on laser beam propagation in such applications. We model the atmosphere along the laser beam propagation path as a spatial distribution of spherical bubbles or curved interfaces. The size and refractive index discontinuity represented by each bubble are statistically distributed according to various models. For each statistical representation of the atmosphere, the path of a single ray, or a bundle of rays, is analyzed using geometrical optics. These Monte Carlo techniques allow us to assess beam wander, beam spread, and phase shifts along the path. An effective Cn² can be determined by correlating beam wander behavior with the path length. This model has already proved capable of assessing beam wander, in particular the (Range)³ dependence of mean-squared beam wander, and in estimating lateral phase decorrelations that develop across the laser phasefront as it propagates through turbulence. In addition, we have developed efficient computational techniques for various correlation functions that are important in assessing the effects of turbulence. The Monte Carlo simulations are compared and show good agreement with the predictions of wave theory.
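
    The cubic range dependence quoted above can be reproduced with a drastically simplified random-walk version of the bubble model (a sketch under invented parameters, not the authors' ray-tracing code): each bubble crossing gives the ray a small random angular kick, and the transverse offset accumulated at the receiver grows roughly with the cube of the path length.

    ```python
    import random

    def beam_wander(path_length, spacing=1.0, rms_deflect=5e-6, n_rays=2000):
        """Mean-squared transverse displacement after propagating path_length
        metres through randomly deflecting 'bubbles' spaced `spacing` apart."""
        msq = 0.0
        n_bubbles = int(path_length / spacing)
        for _ in range(n_rays):
            angle, offset = 0.0, 0.0
            for _ in range(n_bubbles):
                angle += random.gauss(0.0, rms_deflect)   # small-angle kick
                offset += angle * spacing                 # drift to the next bubble
            msq += offset * offset
        return msq / n_rays

    for L in (250.0, 500.0, 1000.0):
        print(L, "m :", beam_wander(L))
    ```

    Doubling the range increases the mean-squared wander by roughly a factor of eight in this toy model, the (Range)³ behavior noted in the abstract.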

  20. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to energy loss of both protons on their path through the sample. In first order, there is no angular dependence due to elastic scattering. In second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth calibrated hydrogen profile. Although the path effect can be calculated analytically in first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes individual detector geometry into account. In this paper we show, that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  1. Course 4: Anyons

    NASA Astrophysics Data System (ADS)

    Myrheim, J.

    Contents 1 Introduction 1.1 The concept of particle statistics 1.2 Statistical mechanics and the many-body problem 1.3 Experimental physics in two dimensions 1.4 The algebraic approach: Heisenberg quantization 1.5 More general quantizations 2 The configuration space 2.1 The Euclidean relative space for two particles 2.2 Dimensions d=1,2,3 2.3 Homotopy 2.4 The braid group 3 Schroedinger quantization in one dimension 4 Heisenberg quantization in one dimension 4.1 The coordinate representation 5 Schroedinger quantization in dimension d ≥ 2 5.1 Scalar wave functions 5.2 Homotopy 5.3 Interchange phases 5.4 The statistics vector potential 5.5 The N-particle case 5.6 Chern-Simons theory 6 The Feynman path integral for anyons 6.1 Eigenstates for position and momentum 6.2 The path integral 6.3 Conjugation classes in SN 6.4 The non-interacting case 6.5 Duality of Feynman and Schroedinger quantization 7 The harmonic oscillator 7.1 The two-dimensional harmonic oscillator 7.2 Two anyons in a harmonic oscillator potential 7.3 More than two anyons 7.4 The three-anyon problem 8 The anyon gas 8.1 The cluster and virial expansions 8.2 First and second order perturbative results 8.3 Regularization by periodic boundary conditions 8.4 Regularization by a harmonic oscillator potential 8.5 Bosons and fermions 8.6 Two anyons 8.7 Three anyons 8.8 The Monte Carlo method 8.9 The path integral representation of the coefficients GP 8.10 Exact and approximate polynomials 8.11 The fourth virial coefficient of anyons 8.12 Two polynomial theorems 9 Charged particles in a constant magnetic field 9.1 One particle in a magnetic field 9.2 Two anyons in a magnetic field 9.3 The anyon gas in a magnetic field 10 Interchange phases and geometric phases 10.1 Introduction to geometric phases 10.2 One particle in a magnetic field 10.3 Two particles in a magnetic field 10.4 Interchange of two anyons in potential wells 10.5 Laughlin's theory of the fractional quantum Hall effect

  2. FORMAL UNCERTAINTY ANALYSIS OF A LAGRANGIAN PHOTOCHEMICAL AIR POLLUTION MODEL. (R824792)

    EPA Science Inventory

    This study applied Monte Carlo analysis with Latin
    hypercube sampling to evaluate the effects of uncertainty
    in air parcel trajectory paths, emissions, rate constants,
    deposition affinities, mixing heights, and atmospheric stability
    on predictions from a vertically...

  3. A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.

    PubMed

    Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E

    2018-06-20

    Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
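
    The overshooting predicted by a slow-speed prior can be seen in a toy observer model (an illustrative sketch, not the authors' implementation; all parameter values are assumptions): the speed estimate at each time step is the posterior mean combining a noisy optic-flow measurement with a zero-mean Gaussian prior, so perceived distance accumulates more slowly than true distance and the simulated subject travels past the goal.

    ```python
    import random

    def perceived_travel(true_speed=1.0, goal=10.0, dt=0.1,
                         sigma_meas=0.4, sigma_prior=1.0):
        """True distance travelled by the time the *perceived* distance,
        integrated from prior-shrunk speed estimates, reaches the goal."""
        w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)  # shrinkage toward 0
        perceived, travelled = 0.0, 0.0
        while perceived < goal:
            meas = true_speed + random.gauss(0.0, sigma_meas)  # noisy optic flow
            est = w * meas                                     # posterior mean, prior mean 0
            perceived += est * dt
            travelled += true_speed * dt
        return travelled

    runs = [perceived_travel() for _ in range(200)]
    print("goal distance  : 10.0")
    print("mean travelled :", sum(runs) / len(runs))   # > 10, i.e. overshoot
    ```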

  4. Which way and how far? Tracking of translation and rotation information for human path integration.

    PubMed

    Chrastil, Elizabeth R; Sherrill, Katherine R; Hasselmo, Michael E; Stern, Chantal E

    2016-10-01

    Path integration, the constant updating of the navigator's knowledge of position and orientation during movement, requires both visuospatial knowledge and memory. This study aimed to develop a systems-level understanding of human path integration by examining the basic building blocks of path integration in humans. To achieve this goal, we used functional imaging to examine the neural mechanisms that support the tracking and memory of translational and rotational components of human path integration. Critically, and in contrast to previous studies, we examined movement in translation and rotation tasks with no defined end-point or goal. Navigators accumulated translational and rotational information during virtual self-motion. Activity in hippocampus, retrosplenial cortex (RSC), and parahippocampal cortex (PHC) increased during both translation and rotation encoding, suggesting that these regions track self-motion information during path integration. These results address current questions regarding distance coding in the human brain. By implementing a modified delayed match to sample paradigm, we also examined the encoding and maintenance of path integration signals in working memory. Hippocampus, PHC, and RSC were recruited during successful encoding and maintenance of path integration information, with RSC selective for tasks that required processing heading rotation changes. These data indicate distinct working memory mechanisms for translation and rotation, which are essential for updating neural representations of current location. The results provide evidence that hippocampus, PHC, and RSC flexibly track task-relevant translation and rotation signals for path integration and could form the hub of a more distributed network supporting spatial navigation. Hum Brain Mapp 37:3636-3655, 2016. © 2016 Wiley Periodicals, Inc.

  5. Perfect discretization of reparametrization invariant path integrals

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin; Dittrich, Bianca; Steinhaus, Sebastian

    2011-05-01

    To obtain a well-defined path integral one often employs discretizations. In the case of gravity and reparametrization-invariant systems, the latter of which we consider here as a toy example, discretizations generically break diffeomorphism and reparametrization symmetry, respectively. This has severe implications, as these symmetries determine the dynamics of the corresponding system. Indeed we will show that a discretized path integral with reparametrization-invariance is necessarily also discretization independent and therefore uniquely determined by the corresponding continuum quantum mechanical propagator. We use this insight to develop an iterative method for constructing such a discretized path integral, akin to a Wilsonian RG flow. This allows us to address the problem of discretization ambiguities and of an anomaly-free path integral measure for such systems. The latter is needed to obtain a path integral, that can act as a projector onto the physical states, satisfying the quantum constraints. We will comment on implications for discrete quantum gravity models, such as spin foams.

  6. Distinct roles of hippocampus and medial prefrontal cortex in spatial and nonspatial memory.

    PubMed

    Sapiurka, Maya; Squire, Larry R; Clark, Robert E

    2016-12-01

    In earlier work, patients with hippocampal damage successfully path integrated, apparently by maintaining spatial information in working memory. In contrast, rats with hippocampal damage were unable to path integrate, even when the paths were simple and working memory might have been expected to support performance. We considered possible ways to understand these findings. We tested rats with either hippocampal lesions or lesions of medial prefrontal cortex (mPFC) on three tasks of spatial or nonspatial memory: path integration, spatial alternation, and a nonspatial alternation task. Rats with mPFC lesions were impaired on both spatial and nonspatial alternation but performed normally on path integration. By contrast, rats with hippocampal lesions were impaired on path integration and spatial alternation but performed normally on nonspatial alternation. We propose that rodent neocortex is limited in its ability to construct a coherent spatial working memory of complex environments. Accordingly, in tasks such as path integration and spatial alternation, working memory cannot depend on neocortex alone. Rats may accomplish many spatial memory tasks by relying on long-term memory. Alternatively, they may accomplish these tasks within working memory through sustained coordination between hippocampus and other cortical brain regions such as mPFC, in the case of spatial alternation, or parietal cortex in the case of path integration. © 2016 Wiley Periodicals, Inc.

  7. Medial temporal lobe roles in human path integration.

    PubMed

    Yamamoto, Naohide; Philbeck, John W; Woods, Adam J; Gajewski, Daniel A; Arthur, Joeanna C; Potolicchio, Samuel J; Levy, Lucien; Caputy, Anthony J

    2014-01-01

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed.

  8. Medial Temporal Lobe Roles in Human Path Integration

    PubMed Central

    Yamamoto, Naohide; Philbeck, John W.; Woods, Adam J.; Gajewski, Daniel A.; Arthur, Joeanna C.; Potolicchio, Samuel J.; Levy, Lucien; Caputy, Anthony J.

    2014-01-01

    Path integration is a process in which observers derive their location by integrating self-motion signals along their locomotion trajectory. Although the medial temporal lobe (MTL) is thought to take part in path integration, the scope of its role for path integration remains unclear. To address this issue, we administered a variety of tasks involving path integration and other related processes to a group of neurosurgical patients whose MTL was unilaterally resected as therapy for epilepsy. These patients were unimpaired relative to neurologically intact controls in many tasks that required integration of various kinds of sensory self-motion information. However, the same patients (especially those who had lesions in the right hemisphere) walked farther than the controls when attempting to walk without vision to a previewed target. Importantly, this task was unique in our test battery in that it allowed participants to form a mental representation of the target location and anticipate their upcoming walking trajectory before they began moving. Thus, these results put forth a new idea that the role of MTL structures for human path integration may stem from their participation in predicting the consequences of one's locomotor actions. The strengths of this new theoretical viewpoint are discussed. PMID:24802000

  9. Rarefaction effects in gas flows over curved surfaces

    NASA Astrophysics Data System (ADS)

    Dongari, Nishanth; White, Craig; Scanlon, Thomas J.; Zhang, Yonghao; Reese, Jason M.

    2012-11-01

    The fundamental test case of gas flow between two concentric rotating cylinders is considered in order to investigate rarefaction effects associated with the Knudsen layers over curved surfaces. We carry out direct simulation Monte Carlo simulations covering a wide range of Knudsen numbers and accommodation coefficients, and for various outer-to-inner cylinder radius ratios. Numerical data is compared with classical slip flow theory and a new power-law (PL) wall scaling model. The PL model incorporates Knudsen layer effects in near-wall regions by taking into account the boundary limiting effects on the molecular free paths. The limitations of both theoretical models are explored with respect to rarefaction and curvature effects. Torque and velocity profile comparisons also convey that mere prediction of integral flow parameters does not guarantee the accuracy of a theoretical model, and that it is important to ensure that prediction of the local flowfield is in agreement with simulation data.

  10. Efimov-driven phase transitions of the unitary Bose gas.

    PubMed

    Piatecki, Swann; Krauth, Werner

    2014-03-20

    Initially predicted in nuclear physics, Efimov trimers are bound configurations of three quantum particles that fall apart when any one of them is removed. They open a window into a rich quantum world that has become the focus of intense experimental and theoretical research, as the region of 'unitary' interactions, where Efimov trimers form, is now accessible in cold-atom experiments. Here we use a path-integral Monte Carlo algorithm backed up by theoretical arguments to show that unitary bosons undergo a first-order phase transition from a normal gas to a superfluid Efimov liquid, bound by the same effects as Efimov trimers. A triple point separates these two phases and another superfluid phase, the conventional Bose-Einstein condensate, whose coexistence line with the Efimov liquid ends in a critical point. We discuss the prospects of observing the proposed phase transitions in cold-atom systems.

  11. Statistical mechanics of neocortical interactions: Path-integral evolution of short-term memory

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1994-05-01

    Previous papers in this series on the statistical mechanics of neocortical interactions (SMNI) have detailed a development from the relatively microscopic scales of neurons up to the macroscopic scales as recorded by electroencephalography (EEG), requiring an intermediate mesocolumnar scale to be developed at the scale of minicolumns (≈10² neurons) and macrocolumns (≈10⁵ neurons). Opportunity was taken to view SMNI as sets of statistical constraints, not necessarily describing specific synaptic or neuronal mechanisms, on neuronal interactions and on some aspects of short-term memory (STM), e.g., its capacity, stability, and duration. A recently developed C-language code, pathint, provides a non-Monte Carlo technique for calculating the dynamic evolution of arbitrary-dimension (subject to computer resources) nonlinear Lagrangians, such as those derived for the two-variable SMNI problem. Here, pathint is used to explicitly detail the evolution of the SMNI constraints on STM.
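
    The flavor of such a non-Monte Carlo path-integral evolution can be conveyed by a bare-bones one-dimensional analogue (a sketch only; the actual pathint code handles nonlinear multivariate Lagrangians, and the drift, diffusion constant, and grid below are arbitrary choices): the probability density is advanced by repeatedly applying a short-time Gaussian conditional propagator, i.e. a discretized Chapman-Kolmogorov step.

    ```python
    import numpy as np

    x = np.linspace(-5.0, 5.0, 401)
    dx = x[1] - x[0]
    dt, D = 0.01, 0.5
    drift = -x                                   # toy linear restoring drift

    # Short-time propagator P(x_{t+dt} | x_t) for dx = drift*dt + sqrt(2D) dW.
    kernel = np.exp(-((x[:, None] - x[None, :] - drift[None, :] * dt) ** 2)
                    / (4.0 * D * dt))
    kernel /= kernel.sum(axis=0, keepdims=True) * dx   # each column integrates to 1

    p = np.exp(-((x - 2.0) ** 2) / 0.1)                # initial density peaked at x = 2
    p /= p.sum() * dx

    for _ in range(200):                               # evolve to t = 2
        p = (kernel @ p) * dx                          # Chapman-Kolmogorov step

    print("mean position after evolution:", float((x * p).sum() * dx))
    ```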

  12. Design and analysis on fume exhaust system of blackbody cavity sensor for continuously measuring molten steel temperature

    NASA Astrophysics Data System (ADS)

    Mei, Guohui; Zhang, Jiu; Zhao, Shumao; Xie, Zhi

    2017-03-01

    The fume exhaust system is the main component of the novel blackbody cavity sensor with a single-layer tube; it removes fume by gas flow along the exhaust pipe to keep the light path clean. However, the gas flow may break the blackbody cavity conditions and result in poor measurement accuracy. In this paper, we analyze the influence of the gas flow on the temperature distribution of the measuring cavity, calculate the integrated effective emissivity of the non-isothermal cavity with a Monte Carlo method, evaluate the resulting sensor measurement accuracy, and finally obtain the maximum allowable flow rate for various lengths of the exhaust pipe that still meets the required accuracy. These results will help optimize the design of the novel blackbody cavity sensor and improve its use for measuring the temperature of molten steel.

  13. Dislocation Structure and Mobility in hcp 4He

    DOE PAGES

    Landinez Borda, Edgar Josue; Cai, Wei; de Koning, Maurice

    2016-07-20

    We assess the core structure and mobility of the screw and edge basal-plane dislocations in hcp 4He using path-integral Monte Carlo simulations. Our findings provide key insights into recent interpretations of giant plasticity and mass flow junction experiments. First, both dislocations are dissociated into nonsuperfluid Shockley partial dislocations separated by ribbons of stacking fault, suggesting that they are unlikely to act as one-dimensional channels that may display Lüttinger-liquid-like behavior. Second, the centroid positions of the partial cores are found to fluctuate substantially, even in the absence of applied shear stresses. This implies that the lattice resistance to motion of the partial dislocations is negligible, consistent with the recent experimental observations of giant plasticity. Our results indicate that both the structure of the partial cores and the zero-point fluctuations play a role in this extreme mobility.

  14. Quantum phases of dipolar soft-core bosons

    NASA Astrophysics Data System (ADS)

    Grimmer, D.; Safavi-Naini, A.; Capogrosso-Sansone, B.; Söyler, Ş. G.

    2014-10-01

    We study the phase diagram of a system of soft-core dipolar bosons confined to a two-dimensional optical lattice layer. We assume that dipoles are aligned perpendicular to the layer such that the dipolar interactions are purely repulsive and isotropic. We consider the full dipolar interaction and perform path-integral quantum Monte Carlo simulations using the worm algorithm. Besides a superfluid phase, we find various solid and supersolid phases. We show that, unlike what was found previously for the case of nearest-neighbor interaction, supersolid phases are stabilized by doping the solids not only with particles but with holes as well. We further study the stability of these quantum phases against thermal fluctuations. Finally, we discuss pair formation and the stability of the pair checkerboard phase formed in a bilayer geometry, and we suggest experimental conditions under which the pair checkerboard phase can be observed.

  15. Uniform quantized electron gas

    NASA Astrophysics Data System (ADS)

    Høye, Johan S.; Lomba, Enrique

    2016-10-01

    In this work we study the correlation energy of the quantized electron gas of uniform density at temperature T  =  0. To do so we utilize methods from classical statistical mechanics. The basis for this is the Feynman path integral for the partition function of quantized systems. With this representation the quantum mechanical problem can be interpreted as, and is equivalent to, a classical polymer problem in four dimensions where the fourth dimension is imaginary time. Thus methods, results, and properties obtained in the statistical mechanics of classical fluids can be utilized. From this viewpoint we recover the well known RPA (random phase approximation). Then to improve it we modify the RPA by requiring the corresponding correlation function to be such that electrons with equal spins can not be on the same position. Numerical evaluations are compared with well known results of a standard parameterization of Monte Carlo correlation energies.

  16. Quantum and classical ripples in graphene

    NASA Astrophysics Data System (ADS)

    Hašík, Juraj; Tosatti, Erio; Martoňák, Roman

    2018-04-01

    Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by the coexistence of two Gaussians, one classical, strongly T-dependent, and one quantum, about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.

  17. Tan's contact and the phase distribution of repulsive Fermi gases: Insights from quantum chromodynamics noise analyses

    NASA Astrophysics Data System (ADS)

    Porter, William J.; Drut, Joaquín E.

    2017-05-01

    Path-integral analyses originally pioneered in the study of the complex-phase problem afflicting lattice calculations of finite-density quantum chromodynamics are generalized to nonrelativistic Fermi gases with repulsive interactions. Using arguments similar to those previously applied to relativistic theories, we show that the analogous problem in nonrelativistic systems manifests itself naturally in Tan's contact as a nontrivial cancellation between terms with varied dependence on extensive thermodynamic quantities. We analyze that case under the assumption of a Gaussian phase distribution, which is supported by our Monte Carlo calculations and perturbative considerations. We further generalize these results to observables other than the contact, as well as to polarized systems and systems with fixed particle number. Our results are quite general in that they apply to repulsive multicomponent fermions, they are independent of dimensionality or trapping potential, and they hold in the ground state as well as at finite temperature.

  18. Path integrals and the WKB approximation in loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Ashtekar, Abhay; Campiglia, Miguel; Henderson, Adam

    2010-12-01

    We follow the Feynman procedure to obtain a path integral formulation of loop quantum cosmology starting from the Hilbert space framework. Quantum geometry effects modify the weight associated with each path so that the effective measure on the space of paths is different from that used in the Wheeler-DeWitt theory. These differences introduce some conceptual subtleties in arriving at the WKB approximation. But the approximation is well defined and provides intuition for the differences between loop quantum cosmology and the Wheeler-DeWitt theory from a path integral perspective.

  19. Perfect discretization of path integrals

    NASA Astrophysics Data System (ADS)

    Steinhaus, Sebastian

    2012-05-01

    In order to obtain a well-defined path integral one often employs discretizations. In the case of General Relativity these generically break diffeomorphism symmetry, which has severe consequences since these symmetries determine the dynamics of the corresponding system. In this article we consider the path integral of reparametrization invariant systems as a toy example and present an improvement procedure for the discretized propagator. Fixed points and convergence of the procedure are discussed. Furthermore we show that a reparametrization invariant path integral implies discretization independence and acts as a projector onto physical states.

  20. Master equations and the theory of stochastic path integrals

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
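
    To make the notion of a forward master equation concrete (this is an illustration only, not the path-integral machinery reviewed in the paper), the short Python sketch below integrates the master equation of a linear birth-death process on a truncated state space and checks the mean against the Poisson limit k/gamma; the rates k and gamma are arbitrary example numbers.

      import numpy as np
      from scipy.linalg import expm

      k, gamma = 5.0, 1.0       # birth rate and per-particle death rate (example values)
      N = 60                    # truncate the state space to n = 0, ..., N-1

      # Rate matrix Q of the forward master equation dP/dt = Q P.
      Q = np.zeros((N, N))
      for n in range(N):
          if n + 1 < N:
              Q[n + 1, n] += k          # birth n -> n+1
              Q[n, n] -= k
          if n > 0:
              Q[n - 1, n] += gamma * n  # death n -> n-1
              Q[n, n] -= gamma * n

      P0 = np.zeros(N)
      P0[0] = 1.0                       # start with zero particles
      for t in (0.5, 2.0, 10.0):
          P = expm(Q * t) @ P0
          print(f"t = {t:5.1f}   <n> = {np.arange(N) @ P:.3f}   (Poisson limit {k / gamma})")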

  1. Master equations and the theory of stochastic path integrals.

    PubMed

    Weber, Markus F; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.

  2. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

    Background Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889

  3. Momentum Distribution as a Fingerprint of Quantum Delocalization in Enzymatic Reactions: Open-Chain Path-Integral Simulations of Model Systems and the Hydride Transfer in Dihydrofolate Reductase.

    PubMed

    Engel, Hamutal; Doron, Dvir; Kohen, Amnon; Major, Dan Thomas

    2012-04-10

    The inclusion of nuclear quantum effects such as zero-point energy and tunneling is of great importance in studying condensed phase chemical reactions involving the transfer of protons, hydrogen atoms, and hydride ions. In the current work, we derive an efficient quantum simulation approach for the computation of the momentum distribution in condensed phase chemical reactions. The method is based on a quantum-classical approach wherein quantum and classical simulations are performed separately. The classical simulations use standard sampling techniques, whereas the quantum simulations employ an open polymer chain path integral formulation which is computed using an efficient Monte Carlo staging algorithm. The approach is validated by applying it to a one-dimensional harmonic oscillator and symmetric double-well potential. Subsequently, the method is applied to the dihydrofolate reductase (DHFR) catalyzed reduction of 7,8-dihydrofolate by nicotinamide adenine dinucleotide phosphate hydride (NADPH) to yield S-5,6,7,8-tetrahydrofolate and NADP(+). The key chemical step in the catalytic cycle of DHFR involves a stereospecific hydride transfer. In order to estimate the amount of quantum delocalization, we compute the position and momentum distributions for the transferring hydride ion in the reactant state (RS) and transition state (TS) using a recently developed hybrid semiempirical quantum mechanics-molecular mechanics potential energy surface. Additionally, we examine the effect of compression of the donor-acceptor distance (DAD) in the TS on the momentum distribution. The present results suggest differential quantum delocalization in the RS and TS, as well as reduced tunneling upon DAD compression.

  4. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as elastic and non-linear; a comparison is made with a linearized restoring force to see the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, a fast Fourier transform (FFT) is used to simulate the additive noise of the dynamic system. The latter significantly reduces the computational time compared to classical PI. The excitation is modelled as Gaussian white noise; however, white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimate of the joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was utilized to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the tail of the probability distribution. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  5. The path dependency theory: analytical framework to study institutional integration. The case of France.

    PubMed

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-06-30

    The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. A qualitative approach was used. Analyses were based on semi-structured interviews with actors of all levels of decision-making, observations of advisory board meetings, and administrative documents. Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. The path dependency to the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France.

  6. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  7. Epidemic extinction paths in complex networks

    NASA Astrophysics Data System (ADS)

    Hindes, Jason; Schwartz, Ira B.

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.
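
    As a toy counterpart of the extinction problem described above (a well-mixed SIS model rather than the networked WKB analysis of the paper), the sketch below runs Gillespie simulations started from the endemic state and averages the time to reach the disease-free state; all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)
      N, beta, gamma = 50, 1.3, 1.0        # population size, infection and recovery rates

      def extinction_time(i0):
          """Gillespie simulation: time until the infected count first reaches zero."""
          i, t = i0, 0.0
          while i > 0:
              rate_inf = beta * i * (N - i) / N     # S -> I events
              rate_rec = gamma * i                  # I -> S events
              total = rate_inf + rate_rec
              t += rng.exponential(1.0 / total)
              i += 1 if rng.random() < rate_inf / total else -1
          return t

      endemic = int(N * (1.0 - gamma / beta))       # deterministic endemic level
      times = [extinction_time(endemic) for _ in range(200)]
      print("mean extinction time:", np.mean(times))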

  8. Epidemic extinction paths in complex networks.

    PubMed

    Hindes, Jason; Schwartz, Ira B

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  9. On the optical path length in refracting media

    NASA Astrophysics Data System (ADS)

    Hasbun, Javier E.

    2018-04-01

    The path light follows as it travels through a substance depends on the substance's index of refraction. This path is commonly known as the optical path length (OPL). In geometrical optics, the laws of reflection and refraction are simple examples for understanding the path of light travel from source to detector for constant values of the traveled substances' refraction indices. In more complicated situations, the Euler equation can be quite useful and quite important in optics courses. Here, the well-known Euler differential equation (EDE) is used to obtain the OPL for several index of refraction models. For pedagogical completeness, the OPL is also obtained through a modified Monte Carlo (MC) method, versus which the various results obtained through the EDE are compared. The examples developed should be important in projects involving undergraduate as well as graduate students in an introductory optics course. A simple matlab script (program) is included that can be modified by students who wish to pursue the subject further.
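
    As a small illustration of the kind of computation discussed above (a hedged Python sketch rather than the article's matlab script, with an assumed linear index profile n(y) = n0 + a*y), one can integrate the ray form of the Euler equation, y'' = (1 + y'^2)(dn/dy)/n, and accumulate the optical path length along the ray.

      import numpy as np

      n0, a = 1.0, 0.1                      # hypothetical linear index profile n(y) = n0 + a*y
      n  = lambda y: n0 + a * y
      dn = lambda y: a

      def trace(y0=0.0, slope0=0.2, x_end=5.0, dx=1e-3):
          y, u, opl = y0, slope0, 0.0
          for _ in range(int(x_end / dx)):
              opl += n(y) * np.sqrt(1.0 + u * u) * dx   # OPL increment, ds = sqrt(1 + y'^2) dx
              y += u * dx
              u += (1.0 + u * u) * dn(y) / n(y) * dx    # ray (Euler) equation for n = n(y)
          return y, opl

      y_final, opl = trace()
      print(f"ray height at x = 5: {y_final:.4f},  optical path length: {opl:.4f}")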

  10. Two-path plasmonic interferometer with integrated detector

    DOEpatents

    Dyer, Gregory Conrad; Shaner, Eric A.; Aizin, Gregory

    2016-03-29

    An electrically tunable terahertz two-path plasmonic interferometer with an integrated detection element can down convert a terahertz field to a rectified DC signal. The integrated detector utilizes a resonant plasmonic homodyne mixing mechanism that measures the component of the plasma waves in-phase with an excitation field that functions as the local oscillator in the mixer. The plasmonic interferometer comprises two independently tuned electrical paths. The plasmonic interferometer enables a spectrometer-on-a-chip where the tuning of electrical path length plays an analogous role to that of physical path length in macroscopic Fourier transform interferometers.

  11. Amplitude and Phase Characteristics of Signals at the Output of Spatially Separated Antennas for Paths with Scattering

    NASA Astrophysics Data System (ADS)

    Anikin, A. S.

    2018-06-01

    Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.

  12. The path integral on the pseudosphere

    NASA Astrophysics Data System (ADS)

    Grosche, C.; Steiner, F.

    1988-02-01

    A rigorous path integral treatment for the d-dimensional pseudosphere Λd-1 , a Riemannian manifold of constant negative curvature, is presented. The path integral formulation is based on a canonical approach using Weyl-ordering and the Hamiltonian path integral defined on midpoints. The time-dependent and energy-dependent Feynman kernels obtain different expressions in the even- and odd-dimensional cases, respectively. The special case of the three-dimensional pseudosphere, which is analytically equivalent to the Poincaré upper half plane, the Poincaré disc, and the hyperbolic strip, is discussed in detail including the energy spectrum and the normalised wave-functions.

  13. Formulation of D-brane Dynamics

    NASA Astrophysics Data System (ADS)

    Evans, Thomas

    2012-03-01

    It is the purpose of this paper (within the context of STS rules & guidelines ``research report'') to formulate a statistical-mechanical form of D-brane dynamics. We consider first the path integral formulation of quantum mechanics, and extend this to a path-integral formulation of D-brane mechanics, summing over all the possible path integral sectors of R-R, NS charged states. We then investigate this generalization utilizing a path-integral formulation summing over all the possible path integral sectors of R-R charged states, calculated from the mean probability tree-level amplitude of type I, IIA, and IIB strings, serving as a generalization of all strings described by D-branes. We utilize this generalization to study black holes in regimes where the initial D-brane system is legitimate, and further this generalization to look at information loss near regions of nonlocality on a non-ordinary event horizon. We see here that in these specific regimes, we can calculate a path integral formulation, as describing D0-brane mechanics, tracing the dissipation of entropy throughout the event horizon. This is used to study the information paradox, and to propose a resolution between the phenomena and the correct and expected quantum mechanical description. This is done as our path integral throughout entropy entering the event horizon effectively and correctly encodes the initial state in subtle correlations in the Hawking radiation.

  14. All-Optical Wavelength-Path Service With Quality Assurance by Multilayer Integration System

    NASA Astrophysics Data System (ADS)

    Yagi, Mikio; Tanaka, Shinya; Satomi, Shuichi; Ryu, Shiro; Asano, Shoichiro

    2006-09-01

    In the future all-optical network controlled by generalized multiprotocol label switching (GMPLS), the wavelength path between end nodes will change dynamically. This inevitably means that the fiber parameters along the wavelength path will also vary. This variation in fiber parameters influences the signal quality of high-speed-transmission system (bit rates over 40 Gb/s). Therefore, at a path setup, the fiber-parameter effect should be adequately compensated. Moreover, the path setup must be completed fast enough to meet the network-application demands. To realize the rapid setup of adequate paths, a multilayer integration system for all-optical wavelength-path quality assurance is proposed. This multilayer integration system is evaluated in a field trial. In the trial, the GMPLS control plane, measurement plane, and data plane coordinated to maintain the quality of a 40-Gb/s wavelength path that would otherwise be degraded by the influence of chromatic dispersion. It is also demonstrated that the multilayer integration system can assure the signal quality in the face of not only chromatic dispersion but also degradation in the optical signal-to-noise ratio by the use of a 2R regeneration system. Our experiments confirm that the proposed multilayer integration system is an essential part of future all-optical networks.

  15. A Monte Carlo Application to Approximate the Integral from a to b of e Raised to the x Squared.

    ERIC Educational Resources Information Center

    Easterday, Kenneth; Smith, Tommy

    1992-01-01

    Proposes an alternative means of approximating the value of complex integrals, the Monte Carlo procedure. Incorporating a discrete approach and probability, an approximation is obtained from the ratio of computer-generated points falling under the curve to the number of points generated in a predetermined rectangle. (MDH)
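
    A minimal sketch of the hit-or-miss procedure described above: random points are thrown into a rectangle bounding y = e^(x^2) on [a, b], and the integral is approximated by the rectangle area times the fraction of points falling under the curve. The interval [0, 1] is an example choice.

      import numpy as np

      rng = np.random.default_rng(1)
      a, b, n_pts = 0.0, 1.0, 200_000
      y_max = np.exp(max(a * a, b * b))          # height of the bounding rectangle

      x = rng.uniform(a, b, n_pts)
      y = rng.uniform(0.0, y_max, n_pts)
      hits = np.count_nonzero(y <= np.exp(x * x))

      estimate = (b - a) * y_max * hits / n_pts
      print("Monte Carlo estimate:", estimate)   # exact value on [0, 1] is about 1.4627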

  16. Path Integral Computation of Quantum Free Energy Differences Due to Alchemical Transformations Involving Mass and Potential.

    PubMed

    Pérez, Alejandro; von Lilienfeld, O Anatole

    2011-08-09

    Thermodynamic integration, perturbation theory, and λ-dynamics methods were applied to path integral molecular dynamics calculations to investigate free energy differences due to "alchemical" transformations. Several estimators were formulated to compute free energy differences in solvable model systems undergoing changes in mass and/or potential. Linear and nonlinear alchemical interpolations were used for the thermodynamic integration. We find improved convergence for the virial estimators, as well as for the thermodynamic integration over nonlinear interpolation paths. Numerical results for the perturbative treatment of changes in mass and electric field strength in model systems are presented. We used thermodynamic integration in ab initio path integral molecular dynamics to compute the quantum free energy difference of the isotope transformation in the Zundel cation. The performance of different free energy methods is discussed.
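
    The following sketch illustrates the bare thermodynamic-integration step in a purely classical setting (not the path integral molecular dynamics of the paper): the force constant of a harmonic oscillator is switched linearly from k0 to k1, <dU/dλ> is averaged over exact Boltzmann samples at each λ, and the integrated result is compared with the analytic free energy difference (kT/2) ln(k1/k0). All numbers are example values.

      import numpy as np

      rng = np.random.default_rng(2)
      kT, k0, k1, n_samp = 1.0, 1.0, 4.0, 50_000
      lams = np.linspace(0.0, 1.0, 21)

      means = []
      for lam in lams:
          k_eff = (1.0 - lam) * k0 + lam * k1
          x = rng.normal(0.0, np.sqrt(kT / k_eff), n_samp)   # exact Boltzmann samples for U = k_eff x^2 / 2
          means.append(np.mean(0.5 * (k1 - k0) * x * x))     # <dU/dlam> at this lambda
      means = np.array(means)

      dF_ti = np.sum(0.5 * (means[:-1] + means[1:]) * np.diff(lams))   # trapezoidal rule over lambda
      dF_exact = 0.5 * kT * np.log(k1 / k0)
      print(f"TI estimate: {dF_ti:.4f}   analytic: {dF_exact:.4f}")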

  17. Path integrals, supersymmetric quantum mechanics, and the Atiyah-Singer index theorem for twisted Dirac

    NASA Astrophysics Data System (ADS)

    Fine, Dana S.; Sawin, Stephen

    2017-01-01

    Feynman's time-slicing construction approximates the path integral by a product, determined by a partition of a finite time interval, of approximate propagators. This paper formulates general conditions to impose on a short-time approximation to the propagator in a general class of imaginary-time quantum mechanics on a Riemannian manifold which ensure that these products converge. The limit defines a path integral which agrees pointwise with the heat kernel for a generalized Laplacian. The result is a rigorous construction of the propagator for supersymmetric quantum mechanics, with potential, as a path integral. Further, the class of Laplacians includes the square of the twisted Dirac operator, which corresponds to an extension of N = 1/2 supersymmetric quantum mechanics. General results on the rate of convergence of the approximate path integrals suffice in this case to derive the local version of the Atiyah-Singer index theorem.

  18. The path integral on the Poincaré upper half-plane with a magnetic field and for the Morse potential

    NASA Astrophysics Data System (ADS)

    Grosche, Christian

    1988-10-01

    Rigorous path integral treatments on the Poincaré upper half-plane with a magnetic field and for the Morse potential are presented. The calculation starts with the path integral on the Poincaré upper half-plane with a magnetic field. By a Fourier expansion and a non-linear transformation this problem is reformulated in terms of the path integral for the Morse potential. This latter problem can be reduced by an appropriate space-time transformation to the path integral for the harmonic oscillator with generalised angular momentum, a technique which has been developed in recent years. The well-known solution for the last problem enables one to give explicit expressions for the Feynman kernels for the Morse potential and for the Poincaré upper half-plane with magnetic field, respectively. The wavefunctions and the energy spectrum for the bound and scattering states are given, respectively.

  19. Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams

    NASA Astrophysics Data System (ADS)

    Willow, Soohaeng Yoo; Hirata, So

    2014-01-01

    A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
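
    The two ingredients named above, a weight function for importance sampling and the Metropolis algorithm, can be illustrated on a much simpler integral than the MP3 diagrams. In this hedged sketch, I = ∫ f(x) dx over R^6 is rewritten as the average of f(x)/w(x) under a normalized Laplace weight w, which is sampled with a Metropolis random walk; all choices are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      dim = 6                                                # dimensionality of the integral
      f = lambda x: np.exp(-np.sum(x * x))                   # integrand; exact I = pi**(dim/2)
      w = lambda x: 0.5 ** dim * np.exp(-np.sum(np.abs(x)))  # normalized Laplace weight

      x = np.zeros(dim)
      acc, n_kept, n_steps = 0.0, 0, 200_000
      for i in range(n_steps):
          y = x + rng.normal(scale=1.0, size=dim)            # symmetric proposal
          if rng.random() < min(1.0, w(y) / w(x)):           # Metropolis acceptance
              x = y
          if i >= 1_000:                                     # discard burn-in
              acc += f(x) / w(x)
              n_kept += 1

      print("estimate:", acc / n_kept, "   exact:", np.pi ** (dim / 2))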

  20. The path dependency theory: analytical framework to study institutional integration. The case of France

    PubMed Central

    Trouvé, Hélène; Couturier, Yves; Etheridge, Francis; Saint-Jean, Olivier; Somme, Dominique

    2010-01-01

    Background The literature on integration indicates the need for an enhanced theorization of institutional integration. This article proposes path dependence as an analytical framework to study the systems in which integration takes place. Purpose PRISMA proposes a model for integrating health and social care services for older adults. This model was initially tested in Quebec. The PRISMA France study gave us an opportunity to analyze institutional integration in France. Methods A qualitative approach was used. Analyses were based on semi-structured interviews with actors of all levels of decision-making, observations of advisory board meetings, and administrative documents. Results Our analyses revealed the complexity and fragmentation of institutional integration. The path dependency theory, which analyzes the change capacity of institutions by taking into account their historic structures, allows analysis of this situation. The path dependency to the Bismarckian system and the incomplete reforms of gerontological policies generate the coexistence and juxtaposition of institutional systems. In such a context, no institution has sufficient ability to determine gerontology policy and build institutional integration by itself. Conclusion Using path dependence as an analytical framework helps to understand the reasons why institutional integration is critical to organizational and clinical integration, and the complex construction of institutional integration in France. PMID:20689740

  1. Direct simulation of high-vorticity gas flows

    NASA Technical Reports Server (NTRS)

    Bird, G. A.

    1987-01-01

    The computational limitations associated with the molecular dynamics (MD) method and the direct simulation Monte Carlo (DSMC) method are reviewed in the context of the computation of dilute gas flows with high vorticity. It is concluded that the MD method is generally limited to the dense gas case in which the molecular diameter is one-tenth or more of the mean free path. It is shown that the cell size in DSMC calculations should be small in comparison with the mean free path, and that this may be facilitated by a new subcell procedure for the selection of collision partners.

  2. Atomic kinetic energy, momentum distribution, and structure of solid neon at zero temperature

    NASA Astrophysics Data System (ADS)

    Cazorla, C.; Boronat, J.

    2008-01-01

    We report on the calculation of the ground-state atomic kinetic energy Ek and momentum distribution of solid Ne by means of the diffusion Monte Carlo method and the Aziz HFD-B pair potential. This approach is shown to perform notably for this crystal since we obtain very good agreement with respect to experimental thermodynamic data. Additionally, we study the structural properties of solid Ne at densities near equilibrium by estimating the radial pair-distribution function, Lindemann's ratio, and the atomic density profile around the positions of the perfect crystalline lattice. Our value for Ek at the equilibrium density is 41.51(6) K, which agrees perfectly with the recent prediction made by Timms et al., 41(2) K, based on their deep-inelastic neutron scattering experiments carried out over the temperature range 4-20 K, and also with previous path integral Monte Carlo results obtained with the Lennard-Jones and Aziz HFD-C2 atomic pairwise interactions. The one-body density function of solid Ne is calculated accurately and found to fit perfectly, within statistical uncertainty, to a Gaussian curve. Furthermore, we analyze the degree of anharmonicity of solid Ne by calculating some of its microscopic ground-state properties within traditional harmonic approaches. We provide an insightful comparison to solid 4He in terms of the Debye model in order to assess the relevance of anharmonic effects in Ne.

  3. Effective optimization using sample persistence: A case study on quantum annealers and various Monte Carlo optimization methods

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili; Katzgraber, Helmut G.

    2017-10-01

    We present and apply a general-purpose, multistart algorithm for improving the performance of low-energy samplers used for solving optimization problems. The algorithm iteratively fixes the value of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are smaller and less connected, and samplers tend to give better low-energy samples for these problems. The algorithm is trivially parallelizable since each start in the multistart algorithm is independent, and could be applied to any heuristic solver that can be run multiple times to give a sample. We present results for several classes of hard problems solved using simulated annealing, path-integral quantum Monte Carlo, parallel tempering with isoenergetic cluster moves, and a quantum annealer, and show that the success metrics and the scaling are improved substantially. When combined with this algorithm, the quantum annealer's scaling was substantially improved for native Chimera graph problems. In addition, with this algorithm the scaling of the time to solution of the quantum annealer is comparable to the Hamze-de Freitas-Selby algorithm on the weak-strong cluster problems introduced by Boixo et al. Parallel tempering with isoenergetic cluster moves was able to consistently solve three-dimensional spin glass problems with 8000 variables when combined with our method, whereas without our method it could not solve any.
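
    The variable-fixing idea can be sketched on a toy problem (this is a loose illustration, not the authors' implementation or any of the solvers they benchmark): several cheap simulated-annealing runs are performed on a random Ising ring, every spin that takes the same value in all runs is clamped, and only the remaining free spins are re-annealed.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 200
      J = rng.normal(size=n)                 # ring couplings: bond i joins s[i] and s[(i+1) % n]

      def energy(s):
          return -np.sum(J * s * np.roll(s, -1))

      def anneal(s, free, sweeps=200, T0=2.0, T1=0.01):
          s = s.copy()
          for k in range(sweeps):
              T = T0 * (T1 / T0) ** (k / (sweeps - 1))     # geometric cooling schedule
              for i in np.flatnonzero(free):
                  dE = 2.0 * s[i] * (J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n])
                  if dE <= 0 or rng.random() < np.exp(-dE / T):
                      s[i] = -s[i]
          return s

      free_all = np.ones(n, dtype=bool)
      samples = [anneal(rng.choice([-1, 1], n), free_all) for _ in range(8)]
      print("best multistart energy:", min(energy(s) for s in samples))

      # Clamp every spin that agrees across all samples, then re-anneal only the rest.
      stack = np.array(samples)
      persistent = np.all(stack == stack[0], axis=0)
      refined = anneal(stack[0], ~persistent)
      print(persistent.sum(), "spins clamped; refined energy:", energy(refined))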

  4. Monte Carlo simulation of collisionless shocks showing preferential acceleration of high A/Z particles. [in cosmic rays

    NASA Technical Reports Server (NTRS)

    Ellison, D. C.; Jones, F. C.; Eichler, D.

    1981-01-01

    A collisionless quasi-parallel shock is simulated by Monte Carlo techniques. Particles of all velocities, from thermal to high energy, are assumed to scatter such that the mean free path is directly proportional to velocity times the mass-to-charge ratio, and inversely proportional to the plasma density. The shock profile and velocity spectra are obtained, showing preferential acceleration of high-A/Z particles relative to protons. The inclusion of the back pressure of the scattering particles on the inflowing plasma produces a smoothing of the shock profile, which implies that the spectra are steeper than for a discontinuous shock.

  5. Mnemonic discrimination relates to perforant path integrity: An ultra-high resolution diffusion tensor imaging study.

    PubMed

    Bennett, Ilana J; Stark, Craig E L

    2016-03-01

    Pattern separation describes the orthogonalization of similar inputs into unique, non-overlapping representations. This computational process is thought to serve memory by reducing interference and to be mediated by the dentate gyrus of the hippocampus. Using ultra-high in-plane resolution diffusion tensor imaging (hrDTI) in older adults, we previously demonstrated that integrity of the perforant path, which provides input to the dentate gyrus from entorhinal cortex, was associated with mnemonic discrimination, a behavioral outcome designed to load on pattern separation. The current hrDTI study assessed the specificity of this perforant path integrity-mnemonic discrimination relationship relative to other cognitive constructs (identified using a factor analysis) and white matter tracts (hippocampal cingulum, fornix, corpus callosum) in 112 healthy adults (20-87 years). Results revealed age-related declines in integrity of the perforant path and other medial temporal lobe (MTL) tracts (hippocampal cingulum, fornix). Controlling for global effects of brain aging, perforant path integrity related only to the factor that captured mnemonic discrimination performance. Comparable integrity-mnemonic discrimination relationships were also observed for the hippocampal cingulum and fornix. Thus, whereas perforant path integrity specifically relates to mnemonic discrimination, mnemonic discrimination may be mediated by a broader MTL network. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Kinetic Monte Carlo Simulation of Oxygen Diffusion in Ytterbium Disilicate

    NASA Technical Reports Server (NTRS)

    Good, Brian S.

    2015-01-01

    Ytterbium disilicate is of interest as a potential environmental barrier coating for aerospace applications, notably for use in next-generation jet turbine engines. In such applications, the transport of oxygen and water vapor through these coatings to the ceramic substrate is undesirable if high-temperature oxidation is to be avoided. In an effort to understand the diffusion process in these materials, we have performed kinetic Monte Carlo simulations of vacancy-mediated and interstitial oxygen diffusion in ytterbium disilicate. Oxygen vacancy and interstitial site energies, vacancy and interstitial formation energies, and migration barrier energies were computed using Density Functional Theory. We have found that, in the case of vacancy-mediated diffusion, many potential diffusion paths involve large barrier energies, but some paths have barrier energies smaller than one electron volt. However, computed vacancy formation energies suggest that the intrinsic vacancy concentration is small. In the case of interstitial diffusion, migration barrier energies are typically around one electron volt, but the interstitial defect formation energies are positive, with the result that the disilicate is unlikely to exhibit significant oxygen permeability except at very high temperature.
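
    A minimal kinetic Monte Carlo loop of the kind referred to above is sketched below; the attempt frequency and barrier energies are invented placeholders, not the DFT values of the study. Each event is selected with probability proportional to its Arrhenius rate, and the clock advances by an exponentially distributed residence time.

      import numpy as np

      rng = np.random.default_rng(5)
      kB, T, nu = 8.617e-5, 1500.0, 1.0e13         # Boltzmann constant (eV/K), temperature (K), attempt frequency (1/s)
      barriers = np.array([0.6, 0.9, 1.1, 1.3])    # hypothetical migration barriers (eV)
      rates = nu * np.exp(-barriers / (kB * T))
      total = rates.sum()

      n_steps = 100_000
      events = rng.choice(len(rates), size=n_steps, p=rates / total)   # pick hops proportional to rate
      dt = rng.exponential(1.0 / total, size=n_steps)                  # residence times

      print("simulated time (s):", dt.sum())
      print("relative hop frequencies:", np.bincount(events, minlength=len(rates)) / n_steps)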

  7. Cross talk in the Lambert-Beer calculation for near-infrared wavelengths estimated by Monte Carlo simulations.

    PubMed

    Uludag, K; Kohl, M; Steinbrink, J; Obrig, H; Villringer, A

    2002-01-01

    Using the modified Lambert-Beer law to analyze attenuation changes measured noninvasively during functional activation of the brain might result in an insufficient separation of chromophore changes ("cross talk") due to the wavelength dependence of the partial path length of photons in the activated volume of the head. The partial path length was estimated by performing Monte Carlo simulations on layered head models. Assuming cortical activation (e.g., at a depth of 8-12 mm), we find negligible cross talk when considering only changes in oxygenated and deoxygenated hemoglobin. However, when changes in the redox state of cytochrome-c-oxidase are additionally taken into account, this analysis produces significant artifacts. An analysis developed for changes in mean time of flight, instead of changes in attenuation, reduces the cross talk for the layers of cortical activation. These results were validated for different oxygen saturations, wavelength combinations, and scattering coefficients. For the analysis of changes in oxygenated and deoxygenated hemoglobin only, low cross talk was also found when the activated volume was assumed to be a 4-mm-diam sphere.
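
    The cross-talk mechanism can be illustrated numerically with a deliberately simple two-wavelength, two-chromophore example; the extinction coefficients and partial path lengths below are invented placeholders, not the values used in the study. Assuming a wavelength-dependent true path length while the analysis uses a single assumed value produces a spurious chromophore change.

      import numpy as np

      # Rows: two wavelengths; columns: two chromophores (HbR, HbO).  All numbers are
      # made-up, order-of-magnitude placeholders for illustration only.
      eps = np.array([[1.5, 0.6],
                      [0.8, 1.0]])
      L_true = np.array([5.0, 6.0])        # true partial path lengths at the two wavelengths
      L_assumed = np.array([5.5, 5.5])     # wavelength-independent path length assumed in the analysis

      dc_true = np.array([0.0, 1.0])       # only the HbO concentration changes
      dA = (eps * L_true[:, None]) @ dc_true           # "measured" attenuation changes

      dc_recovered = np.linalg.solve(eps * L_assumed[:, None], dA)
      print("true      dc:", dc_true)
      print("recovered dc:", dc_recovered)             # nonzero HbR component = cross talk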

  8. Spectral deconvolution and operational use of stripping ratios in airborne radiometrics.

    PubMed

    Allyson, J D; Sanderson, D C

    2001-01-01

    Spectral deconvolution using stripping ratios for a set of pre-defined energy windows is the simplest means of reducing the most important part of gamma-ray spectral information. In this way, the effective interferences between the measured peaks are removed, leading, through a calibration, to clear estimates of radionuclide inventory. While laboratory measurements of stripping ratios are relatively easy to acquire, with detectors placed above small-scale calibration pads of known radionuclide concentrations, the extrapolation to measurements at the altitudes where airborne survey detectors are used brings difficulties such as air-path attenuation and greater uncertainties in knowing ground-level inventories. Stripping ratios are altitude dependent, and laboratory measurements using various absorbers to simulate the air path have been used with some success. Full-scale measurements from an aircraft require a suitable location where radionuclide concentrations vary little over the field of view of the detector (which may be hundreds of metres). Monte Carlo simulations offer the potential of full-scale reproduction of gamma-ray transport and detection mechanisms. Investigations have been made to evaluate stripping ratios using experimental and Monte Carlo methods.

  9. Free energy landscape from path-sampling: application to the structural transition in LJ38

    NASA Astrophysics Data System (ADS)

    Adjanor, G.; Athènes, M.; Calvo, F.

    2006-09-01

    We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.

  10. Phonon Scattering and Confinement in Crystalline Films

    NASA Astrophysics Data System (ADS)

    Parrish, Kevin D.

    The operating temperature of energy conversion and electronic devices affects their efficiency and efficacy. In many devices, however, the reference values of the thermal properties of the materials used are no longer applicable due to processing techniques performed. This leads to challenges in thermal management and thermal engineering that demand accurate predictive tools and high fidelity measurements. The thermal conductivity of strained, nanostructured, and ultra-thin dielectrics are predicted computationally using solutions to the Boltzmann transport equation. Experimental measurements of thermal diffusivity are performed using transient grating spectroscopy. The thermal conductivities of argon, modeled using the Lennard-Jones potential, and silicon, modeled using density functional theory, are predicted under compressive and tensile strain from lattice dynamics calculations. The thermal conductivity of silicon is found to be invariant with compression, a result that is in disagreement with previous computational efforts. This difference is attributed to the more accurate force constants calculated from density functional theory. The invariance is found to be a result of competing effects of increased phonon group velocities and decreased phonon lifetimes, demonstrating how the anharmonic contribution of the atomic potential can scale differently than the harmonic contribution. Using three Monte Carlo techniques, the phonon-boundary scattering and the subsequent thermal conductivity reduction are predicted for nanoporous silicon thin films. The Monte Carlo techniques used are free path sampling, isotropic ray-tracing, and a new technique, modal ray-tracing. The thermal conductivity predictions from all three techniques are observed to be comparable to previous experimental measurements on nanoporous silicon films. The phonon mean free paths predicted from isotropic ray-tracing, however, are unphysical as compared to those predicted by free path sampling. Removing the isotropic assumption, leading to the formulation of modal ray-tracing, corrects the mean free path distribution. The effect of phonon line-of-sight is investigated in nanoporous silicon films using free path sampling. When the line-of-sight is cut off there is a distinct change in thermal conductivity versus porosity. By analyzing the free paths of an obstructed phonon mode, it is concluded that the trend change is due to a hard upper limit on the free paths that can exist due to the nanopore geometry in the material. The transient grating technique is an optical contact-less laser based experiment for measuring the in-plane thermal diffusivity of thin films and membranes. The theory of operation and physical setup of a transient grating experiment is detailed. The procedure for extracting the thermal diffusivity from the raw experimental signal is improved upon by removing arbitrary user choice in the fitting parameters used and constructing a parameterless error minimizing procedure. The thermal conductivity of ultra-thin argon films modeled with the Lennard-Jones potential is calculated from both the Monte Carlo free path sampling technique and from explicit reduced dimensionality lattice dynamics calculations. In these ultra-thin films, the phonon properties are altered in more than a perturbative manner, referred to as the confinement regime. 
The free path sampling technique, which is a perturbative method, is compared to a reduced dimensionality lattice dynamics calculation where the entire film thickness is taken as the unit cell. Divergence in thermal conductivity magnitude and trend is found at few unit cell thick argon films. Although the phonon group velocities and lifetimes are affected, it is found that alterations to the phonon density of states are the primary cause of the deviation in thermal conductivity in the confinement regime.
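
    In the perturbative spirit of the free path sampling technique mentioned above (a hedged sketch, not the thesis code, restricted to a single isotropic mode with diffuse boundaries), one can estimate the boundary-limited mean free path in a film by sampling positions and directions and truncating the bulk free path at the distance to the nearest surface; the film thickness and bulk mean free path are arbitrary example values.

      import numpy as np

      rng = np.random.default_rng(6)
      thickness, lam_bulk, n_samples = 100e-9, 300e-9, 200_000   # film thickness and bulk MFP (m)

      z = rng.uniform(0.0, thickness, n_samples)        # starting height within the film
      mu = rng.uniform(-1.0, 1.0, n_samples)            # cosine of the angle to the film normal
      to_surface = np.where(mu > 0, thickness - z, z) / np.maximum(np.abs(mu), 1e-12)
      free_path = np.minimum(rng.exponential(lam_bulk, n_samples), to_surface)

      print("boundary-limited mean free path (nm):", 1e9 * free_path.mean())
      print("bulk mean free path (nm):             ", 1e9 * lam_bulk)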

  11. Sensory feedback in a bump attractor model of path integration.

    PubMed

    Poll, Daniel B; Nguyen, Khanh; Kilpatrick, Zachary P

    2016-04-01

    Mammalian spatial navigation systems utilize several different sensory information channels. This information is converted into a neural code that represents the animal's current position in space by engaging place cell, grid cell, and head direction cell networks. In particular, sensory landmark (allothetic) cues can be utilized in concert with an animal's knowledge of its own velocity (idiothetic) cues to generate a more accurate representation of position than path integration provides on its own (Battaglia et al. The Journal of Neuroscience 24(19):4541-4550 (2004)). We develop a computational model that merges path integration with feedback from external sensory cues that provide a reliable representation of spatial position along an annular track. Starting with a continuous bump attractor model, we explore the impact of synaptic spatial asymmetry and heterogeneity, which disrupt the position code of the path integration process. We use asymptotic analysis to reduce the bump attractor model to a single scalar equation whose potential represents the impact of asymmetry and heterogeneity. Such imperfections cause errors to build up when the network performs path integration, but these errors can be corrected by an external control signal representing the effects of sensory cues. We demonstrate that there is an optimal strength and decay rate of the control signal when cues appear either periodically or randomly. A similar analysis is performed when errors in path integration arise from dynamic noise fluctuations. Again, there is an optimal strength and decay of discrete control that minimizes the path integration error.
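
    The interplay described above can be caricatured with a scalar toy model (this is a hedged illustration, not the authors' derived potential or control law): a position estimate integrates velocity, accumulates error from a constant drift and noise, and discrete sensory cues apply a corrective pull whose strength decays after each cue. All parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(7)
      dt, t_end = 0.01, 100.0
      v, drift, noise = 0.5, 0.05, 0.05              # true speed, integration bias, noise strength
      cue_interval, cue_strength, cue_decay = 5.0, 2.0, 1.0

      def run(with_cues):
          t, true_pos, est, last_cue, sq_err = 0.0, 0.0, 0.0, -np.inf, []
          while t < t_end:
              true_pos += v * dt
              gain = cue_strength * np.exp(-(t - last_cue) / cue_decay) if with_cues else 0.0
              est += (v + drift) * dt + noise * np.sqrt(dt) * rng.normal() \
                     - gain * (est - true_pos) * dt
              if with_cues and t - last_cue >= cue_interval:
                  last_cue = t                       # a new sensory cue arrives
              sq_err.append((est - true_pos) ** 2)
              t += dt
          return np.sqrt(np.mean(sq_err))

      print("RMS error, path integration alone:", run(False))
      print("RMS error, with sensory cues:     ", run(True))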

  12. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    NASA Astrophysics Data System (ADS)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte Carlo (MC) scheme for the transport phase. A common resolution strategy consists of a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver that improves the accuracy of the solutions, avoids numerical instabilities, and is less sensitive to time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.

  13. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a partially or completely bounded medium, that supplements analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.

  14. Molecular dynamics and Monte Carlo simulations resolve apparent diffusion rate differences for proteins confined in nanochannels

    DOE PAGES

    Tringe, J. W.; Ileri, N.; Levie, H. W.; ...

    2015-08-01

    We use Molecular Dynamics and Monte Carlo simulations to examine molecular transport phenomena in nanochannels, explaining four orders of magnitude difference in wheat germ agglutinin (WGA) protein diffusion rates observed by fluorescence correlation spectroscopy (FCS) and by direct imaging of fluorescently-labeled proteins. We first use the ESPResSo Molecular Dynamics code to estimate the surface transport distance for neutral and charged proteins. We then employ a Monte Carlo model to calculate the paths of protein molecules on surfaces and in the bulk liquid transport medium. Our results show that the transport characteristics depend strongly on the degree of molecular surface coverage. Atomic force microscope characterization of surfaces exposed to WGA proteins for 1000 s shows large protein aggregates consistent with the predicted coverage. These calculations and experiments provide useful insight into the details of molecular motion in confined geometries.

  15. Implementation of Monte Carlo Dose calculation for CyberKnife treatment planning

    NASA Astrophysics Data System (ADS)

    Ma, C.-M.; Li, J. S.; Deng, J.; Fan, J.

    2008-02-01

    Accurate dose calculation is essential to advanced stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) especially for treatment planning involving heterogeneous patient anatomy. This paper describes the implementation of a fast Monte Carlo dose calculation algorithm in SRS/SRT treatment planning for the CyberKnife® SRS/SRT system. A superposition Monte Carlo algorithm is developed for this application. Photon mean free paths and interaction types for different materials and energies as well as the tracks of secondary electrons are pre-simulated using the MCSIM system. Photon interaction forcing and splitting are applied to the source photons in the patient calculation and the pre-simulated electron tracks are repeated with proper corrections based on the tissue density and electron stopping powers. Electron energy is deposited along the tracks and accumulated in the simulation geometry. Scattered and bremsstrahlung photons are transported, after applying the Russian roulette technique, in the same way as the primary photons. Dose calculations are compared with full Monte Carlo simulations performed using EGS4/MCSIM and the CyberKnife treatment planning system (TPS) for lung, head & neck and liver treatments. Comparisons with full Monte Carlo simulations show excellent agreement (within 0.5%). More than 10% differences in the target dose are found between Monte Carlo simulations and the CyberKnife TPS for SRS/SRT lung treatment while negligible differences are shown in head and neck and liver for the cases investigated. The calculation time using our superposition Monte Carlo algorithm is reduced up to 62 times (46 times on average for 10 typical clinical cases) compared to full Monte Carlo simulations. SRS/SRT dose distributions calculated by simple dose algorithms may be significantly overestimated for small lung target volumes, which can be improved by accurate Monte Carlo dose calculations.
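
    One ingredient of such heterogeneity-aware photon transport, sampling the next interaction site through a density grid by scaling a reference attenuation coefficient with the local density, can be sketched as follows; the voxel densities and the reference coefficient are placeholder numbers, not data from the CyberKnife system or MCSIM.

      import numpy as np

      rng = np.random.default_rng(8)
      mu_water = 0.05                     # reference attenuation coefficient (1/mm), example value
      voxel = 2.0                         # voxel size (mm)
      density = np.array([1.0] * 20 + [0.3] * 40 + [1.0] * 20)   # tissue-lung-tissue slab phantom

      def sample_interaction_depth():
          """Return the depth (mm) of the next interaction, or None if the photon escapes."""
          target = -np.log(rng.random())               # sampled optical depth
          tau, depth = 0.0, 0.0
          for rho in density:
              mu = mu_water * rho                      # density-scaled attenuation in this voxel
              if tau + mu * voxel >= target:
                  return depth + (target - tau) / mu   # interaction inside this voxel
              tau += mu * voxel
              depth += voxel
          return None                                  # escaped the grid

      depths = [d for d in (sample_interaction_depth() for _ in range(20_000)) if d is not None]
      print("mean interaction depth (mm):", np.mean(depths))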

  16. A Note on Feynman Path Integral for Electromagnetic External Fields

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2017-08-01

    We propose a Fresnel stochastic white noise framework to analyze the nature of the Feynman paths entering into the Feynman Path Integral expression for the Feynman Propagator of a particle quantum mechanically moving under an external electromagnetic time-independent potential.

  17. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    PubMed

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
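
    A bare-bones numerical illustration of the path-integration computation itself (not the anatomically constrained circuit of the paper) is to accumulate a home vector from instantaneous ground velocity; decomposing the velocity into components along and across the body axis captures the holonomic case in which motion is not aligned with heading. The sample data below are invented.

        import math

        def integrate_path(samples):
            """Accumulate a home vector from (heading, forward_speed, sideways_speed).

            Headings are in radians; the sideways component represents holonomic
            motion, i.e. ground velocity not aligned with body orientation.
            Returns the vector pointing from the end point back to the start.
            """
            x = y = 0.0
            for heading, v_forward, v_side in samples:
                x += v_forward * math.cos(heading) - v_side * math.sin(heading)
                y += v_forward * math.sin(heading) + v_side * math.cos(heading)
            return -x, -y

        # Invented outbound leg, flown partly sideways (e.g. crosswind drift).
        outbound = [(0.0, 1.0, 0.2)] * 50 + [(math.pi / 2, 1.0, 0.0)] * 30
        print(integrate_path(outbound))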

  18. SU-F-T-125: Radial Dose Distributions From Carbon Ions of Therapeutic Energies Calculated with Geant4-DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vassiliev, O

    Purpose: Radial dose distribution D(r) is the dose as a function of lateral distance from the path of a heavy charged particle. Its main application is in modelling of biological effects of heavy ions, including applications to hadron therapy. It is the main physical parameter of a broad group of radiobiological models known as the amorphous track models. Our purpose was to calculate D(r) with Monte Carlo for carbon ions of therapeutic energies, find a simple formula for D(r) and fit it to the Monte Carlo data. Methods: All calculations were performed with Geant4-DNA code, for carbon ion energies from 10 to 400 MeV/u (ranges in water: ∼ 0.4 mm to 27 cm). The spatial resolution of dose distribution in the lateral direction was 1 nm. Electron tracking cut off energy was 11 eV (ionization threshold). The maximum lateral distance considered was 10 µm. Over this distance, D(r) decreases with distance by eight orders of magnitude. Results: All calculated radial dose distributions had a similar shape dominated by the well-known inverse square dependence on the distance. Deviations from the inverse square law were observed close to the beam path (r<10 nm) and at large distances (r >1 µm). At small and large distances D(r) decreased, respectively, slower and faster than the inverse square of distance. A formula for D(r) consistent with this behavior was found and fitted to the Monte Carlo data. The accuracy of the fit was better than 10% for all distances considered. Conclusion: We have generated a set of radial dose distributions for carbon ions that covers the entire range of therapeutic energies, for distances from the ion path of up to 10 µm. The latter distance is sufficient for most applications because dose beyond 10 µm is extremely low.
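
    The abstract does not state the fitted formula, so the sketch below uses one plausible modified inverse-square form, purely as an assumption for illustration, to show how such a formula could be least-squares fitted to radial dose data in log space.

        import numpy as np
        from scipy.optimize import curve_fit

        def d_radial(r, a, r_core, r_max):
            # Assumed functional form (NOT the formula from the abstract):
            # inverse square, flattened below r_core and cut off beyond r_max.
            return a / (r**2 + r_core**2) * np.exp(-(r / r_max) ** 2)

        def log_d(r, a, r_core, r_max):
            return np.log(d_radial(r, a, r_core, r_max))

        r = np.logspace(0, 4, 200)                       # 1 nm to 10 um, in nm
        exact = d_radial(r, 1.0, 8.0, 2000.0)            # synthetic "data"
        noisy = exact * np.random.default_rng(0).lognormal(0.0, 0.05, r.size)

        # Fit in log space so every decade of D(r) is weighted evenly.
        popt, _ = curve_fit(log_d, r, np.log(noisy), p0=(1.0, 5.0, 1000.0),
                            bounds=(1e-6, np.inf))
        print("fitted a, r_core, r_max:", popt)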

  19. Spin coherent-state path integrals and the instanton calculus

    NASA Astrophysics Data System (ADS)

    Garg, Anupam; Kochetov, Evgueny; Park, Kee-Su; Stone, Michael

    2003-01-01

    We use an instanton approximation to the continuous-time spin coherent-state path integral to obtain the tunnel splitting of classically degenerate ground states. We show that provided the fluctuation determinant is carefully evaluated, the path integral expression is accurate to order O(1/j). We apply the method to the LMG model and to the molecular magnet Fe8 in a transverse field.

  20. Semiclassical evaluation of quantum fidelity

    NASA Astrophysics Data System (ADS)

    Vanicek, Jiri

    2004-03-01

    We present a numerically feasible semiclassical method to evaluate quantum fidelity (Loschmidt echo) in a classically chaotic system. It was thought that such evaluation would be intractable, but instead we show that a uniform semiclassical expression not only is tractable but it gives remarkably accurate numerical results for the standard map in both the Fermi-golden-rule and Lyapunov regimes. Because it allows a Monte-Carlo evaluation, this uniform expression is accurate at times where there are 10^70 semiclassical contributions. Remarkably, the method also explicitly contains the ``building blocks'' of analytical theories of recent literature, and thus permits a direct test of approximations made by other authors in these regimes, rather than an a posteriori comparison with numerical results. We explain in more detail the extended validity of the classical perturbation approximation and thus provide a ``defense" of the linear response theory from the famous Van Kampen objection. We point out the potential use of our uniform expression in other areas because it gives a most direct link between the quantum Feynman propagator based on the path integral and the semiclassical Van Vleck propagator based on the sum over classical trajectories. Finally, we test the applicability of our method in integrable and mixed systems.

  1. Geometrical Monte Carlo simulation of atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Yuksel, Demet; Yuksel, Heba

    2013-09-01

    Atmospheric turbulence has a significant impact on the quality of a laser beam propagating through the atmosphere over long distances. Turbulence causes intensity scintillation and beam wander from propagation through turbulent eddies of varying sizes and refractive index. This can severely impair the operation of target designation and Free-Space Optical (FSO) communications systems. In addition, experimenting on an FSO communication system is rather tedious and difficult. Interference from numerous factors affects the results and causes the experimental outcomes to have larger error variances than expected. Especially in the stronger turbulence regimes, the simulation and analysis of turbulence-induced beams require careful attention. We propose a new geometrical model to assess the phase shift of a laser beam propagating through turbulence. The atmosphere along the laser beam propagation path will be modeled as a spatial distribution of spherical bubbles with refractive index discontinuity calculated from a Gaussian distribution with the mean value being the index of air. For each statistical representation of the atmosphere, the path of rays will be analyzed using geometrical optics. These Monte Carlo techniques will assess the phase shift as a summation of the phases that arrive at the same point at the receiver. Accordingly, there would be dark and bright spots at the receiver that give an idea regarding the intensity pattern without having to solve the wave equation. The Monte Carlo analysis will be compared with the predictions of wave theory.
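
    A minimal sketch of the geometrical idea described above (illustrative parameters only, not the authors' model): spherical bubbles with Gaussian-distributed refractive-index offsets are scattered along the propagation path, a ray along the axis accumulates a phase proportional to the chord length through each bubble it crosses, and repeating over random atmosphere realizations gives the phase statistics.

        import numpy as np

        def ray_phase(rng, path_len=1000.0, n_bubbles=2000, radius=0.05,
                      dn_sigma=1e-6, wavelength=532e-9):
            """Phase shift of a ray along the x-axis through one random realization
            of spherical bubbles whose refractive-index offset is Gaussian."""
            centers = rng.uniform([0.0, -1.0, -1.0], [path_len, 1.0, 1.0],
                                  size=(n_bubbles, 3))
            dn = rng.normal(0.0, dn_sigma, n_bubbles)
            # Perpendicular distance of each bubble centre from the ray (the x-axis).
            rho = np.hypot(centers[:, 1], centers[:, 2])
            hit = rho < radius
            chord = 2.0 * np.sqrt(radius**2 - rho[hit] ** 2)
            return 2.0 * np.pi / wavelength * np.sum(dn[hit] * chord)

        rng = np.random.default_rng(42)
        phases = np.array([ray_phase(rng) for _ in range(200)])
        print("phase standard deviation (rad):", phases.std())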

  2. Variational nature, integration, and properties of Newton reaction path

    NASA Astrophysics Data System (ADS)

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-01

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.

  3. Variational nature, integration, and properties of Newton reaction path.

    PubMed

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-21

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.

  4. Path integration of head direction: updating a packet of neural activity at the correct speed using neuronal time constants.

    PubMed

    Walters, D M; Stringer, S M

    2010-07-01

    A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
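
    The role of the neuronal time constant can be illustrated with a single leaky integrator, tau dr/dt = -r + input, which responds to a step in its input with a lag of roughly tau; this is only a one-neuron caricature, not the continuous attractor and competitive networks used in the paper.

        def leaky_integrator(inputs, tau=20.0, dt=1.0):
            """Discretized tau*dr/dt = -r + input; returns the firing-rate trace."""
            r, trace = 0.0, []
            for inp in inputs:
                r += dt / tau * (inp - r)
                trace.append(r)
            return trace

        # Step input: the rate reaches roughly 63% of the input after about one tau.
        trace = leaky_integrator([1.0] * 100, tau=20.0)
        print(trace[19], trace[-1])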

  5. From classical to quantum and back: Hamiltonian adaptive resolution path integral, ring polymer, and centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.

    2017-12-01

    Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.

  6. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations, one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical, centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply, PI-FEP) method along with bisection sampling was summarized, which provides an accurate and fast convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications, to highlight the computational precision and accuracy, the rule of geometrical mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and protein dynamics on temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.

  7. Architectural constraints are a major factor reducing path integration accuracy in the rat head direction cell system.

    PubMed

    Page, Hector J I; Walters, Daniel; Stringer, Simon M

    2015-01-01

    Head direction cells fire to signal the direction in which an animal's head is pointing. They are able to track head direction using only internally-derived information (path integration). In this simulation study we investigate the factors that affect path integration accuracy. Specifically, two major limiting factors are identified: rise time, the time after stimulation it takes for a neuron to start firing, and the presence of symmetric non-offset within-layer recurrent collateral connectivity. On the basis of the latter, the important prediction is made that head direction cell regions directly involved in path integration will not contain this type of connectivity, giving a theoretical explanation for architectural observations. Increased neuronal rise time is found to slow path integration, and the slowing effect for a given rise time is found to be more severe in the context of short conduction delays. Further work is suggested on the basis of our findings, which represent a valuable contribution to the understanding of the head direction cell system.

  8. Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies

    NASA Astrophysics Data System (ADS)

    Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura

    2018-05-01

    Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first quantized description of a Dirac fermion. As an application we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.

  9. Generalized causal mediation and path analysis: Extensions and practical considerations.

    PubMed

    Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra

    2018-01-01

    Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.

  10. Analytical Applications of Monte Carlo Techniques.

    ERIC Educational Resources Information Center

    Guell, Oscar A.; Holcombe, James A.

    1990-01-01

    Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
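
    As a concrete example of the Monte Carlo integration mentioned above, the sketch below estimates a one-dimensional integral by averaging the integrand at uniformly sampled points; the statistical error decreases as 1/sqrt(N).

        import math
        import random

        def mc_integrate(f, a, b, n, seed=0):
            """Estimate the integral of f on [a, b] from n uniform samples."""
            rng = random.Random(seed)
            total = sum(f(rng.uniform(a, b)) for _ in range(n))
            return (b - a) * total / n

        # Integral of exp(-x^2) on [0, 1]; the exact value is about 0.746824.
        for n in (100, 10_000, 1_000_000):
            print(n, mc_integrate(lambda x: math.exp(-x * x), 0.0, 1.0, n))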

  11. Quantization of Simple Parametrized Systems

    NASA Astrophysics Data System (ADS)

    Ruffini, Giulio

    1995-01-01

    I study the canonical formulation and quantization of some simple parametrized systems using Dirac's formalism and the Becchi-Rouet-Stora-Tyutin (BRST) extended phase space method. These systems include the parametrized particle and minisuperspace. Using Dirac's formalism I first analyze for each case the construction of the classical reduced phase space. There are two separate features of these systems that may make this construction difficult: (a) Because of the boundary conditions used, the actions are not gauge invariant at the boundaries. (b) The constraints may have a disconnected solution space. The relativistic particle and minisuperspace have such complicated constraints, while the non-relativistic particle displays only the first feature. I first show that a change of gauge fixing is equivalent to a canonical transformation in the reduced phase space, thus resolving the problems associated with the first feature above. Then I consider the quantization of these systems using several approaches: Dirac's method, Dirac-Fock quantization, and the BRST formalism. In the cases of the relativistic particle and minisuperspace I consider first the quantization of one branch of the constraint at the time and then discuss the backgrounds in which it is possible to quantize simultaneously both branches. I motivate and define the inner product, and obtain, for example, the Klein-Gordon inner product for the relativistic case. Then I show how to construct phase space path integral representations for amplitudes in these approaches--the Batalin-Fradkin-Vilkovisky (BFV) and the Faddeev path integrals --from which one can then derive the path integrals in coordinate space--the Faddeev-Popov path integral and the geometric path integral. In particular I establish the connection between the Hilbert space representation and the range of the lapse in the path integrals. I also examine the class of paths that contribute in the path integrals and how they affect space-time covariance, concluding that it is consistent to take paths that move forward in time only when there is no electric field. The key elements in this analysis are the space-like paths and the behavior of the action under the non-trivial ( Z_2) element of the reparametrization group.

  12. 77 FR 74027 - Certain Integrated Circuit Packages Provided with Multiple Heat-Conducting Paths and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-12

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-851] Certain Integrated Circuit Packages Provided with Multiple Heat- Conducting Paths and Products Containing Same; Commission Determination Not To... provided with multiple heat-conducting paths and products containing same by reason of infringement of...

  13. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  14. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  15. The Development and Comparison of Molecular Dynamics Simulation and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Chen, Jundong

    2018-03-01

    Molecular dynamics is an integrated technique that combines physics, mathematics, and chemistry. The molecular dynamics method is a computer simulation method and a powerful tool for studying condensed matter systems. The technique not only yields the trajectories of atoms but also reveals the microscopic details of atomic motion. By studying the numerical integration algorithms used in molecular dynamics simulation, we can analyze the microstructure, the motion of particles, and their relationship to macroscopic material properties, and more conveniently study the relationship between interactions and macroscopic properties. Monte Carlo simulation, similar to molecular dynamics, is a tool for studying matter at the molecular and particle level. In this paper, the theoretical background of computer numerical simulation is introduced, and the specific methods of numerical integration are summarized, including the Verlet method, the Leap-frog method, and the Velocity Verlet method. At the same time, the method and principle of Monte Carlo simulation are introduced. Finally, similarities and differences between Monte Carlo simulation and molecular dynamics simulation are discussed.
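
    For reference, the last of the three integrators named above can be written in a few lines; the following sketch applies a velocity Verlet step to a harmonic oscillator (an illustrative example, not taken from the paper).

        def velocity_verlet(x, v, force, mass, dt, n_steps):
            """Velocity Verlet: position and velocity are advanced together, symplectically."""
            a = force(x) / mass
            trajectory = []
            for _ in range(n_steps):
                x += v * dt + 0.5 * a * dt * dt
                a_new = force(x) / mass
                v += 0.5 * (a + a_new) * dt
                a = a_new
                trajectory.append((x, v))
            return trajectory

        # Harmonic oscillator with k = m = 1: the energy 0.5*(v**2 + x**2) stays near 0.5.
        x, v = velocity_verlet(1.0, 0.0, lambda x: -x, 1.0, 0.01, 10_000)[-1]
        print(0.5 * (v * v + x * x))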

  16. Spatial Updating Strategy Affects the Reference Frame in Path Integration.

    PubMed

    He, Qiliang; McNamara, Timothy P

    2018-06-01

    This study investigated how spatial updating strategies affected the selection of reference frames in path integration. Participants walked an outbound path consisting of three successive waypoints in a featureless environment and then pointed to the first waypoint. We manipulated the alignment of participants' final heading at the end of the outbound path with their initial heading to examine the adopted reference frame. We assumed that the initial heading defined the principal reference direction in an allocentric reference frame. In Experiment 1, participants were instructed to use a configural updating strategy and to monitor the shape of the outbound path while they walked it. Pointing performance was best when the final heading was aligned with the initial heading, indicating the use of an allocentric reference frame. In Experiment 2, participants were instructed to use a continuous updating strategy and to keep track of the location of the first waypoint while walking the outbound path. Pointing performance was equivalent regardless of the alignment between the final and the initial headings, indicating the use of an egocentric reference frame. These results confirmed that people could employ different spatial updating strategies in path integration (Wiener, Berthoz, & Wolbers Experimental Brain Research 208(1) 61-71, 2011), and suggested that these strategies could affect the selection of the reference frame for path integration.

  17. Integration within the Felsenstein equation for improved Markov chain Monte Carlo methods in population genetics

    PubMed Central

    Hey, Jody; Nielsen, Rasmus

    2007-01-01

    In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although this likelihood is not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided. PMID:17301231

  18. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  19. Hedged Monte-Carlo: low variance derivative pricing with objective probabilities

    NASA Astrophysics Data System (ADS)

    Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan

    2001-01-01

    We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows one to simultaneously determine the optimal hedge. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason considerably reduces the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early-exercise features.
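
    A one-period toy version of the hedged Monte Carlo idea (a sketch under simplified assumptions, not the authors' multi-period scheme): paths are simulated under the objective drift, the hedge ratio is chosen by least squares to minimize the variance of the hedged payoff, and the price is estimated as the mean of that hedged payoff. Parameter values below are arbitrary.

        import numpy as np

        def hedged_mc_price(s0, strike, mu, sigma, t, n_paths=100_000, seed=0):
            """One-period hedged Monte Carlo estimate of a European call price.

            Paths are simulated under the objective drift mu; subtracting the
            variance-minimizing linear hedge shrinks the statistical error and
            removes most of the drift dependence of the estimate.
            """
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            s_t = s0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
            payoff = np.maximum(s_t - strike, 0.0)
            ds = s_t - s0
            phi = np.cov(payoff, ds)[0, 1] / np.var(ds)   # optimal static hedge ratio
            hedged = payoff - phi * ds
            naive_err = payoff.std() / np.sqrt(n_paths)
            hedged_err = hedged.std() / np.sqrt(n_paths)
            return hedged.mean(), naive_err, hedged_err

        price, naive_err, hedged_err = hedged_mc_price(100.0, 100.0, 0.1, 0.2, 1.0)
        print(price, naive_err, hedged_err)   # the hedged error is much smaller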

  20. High bandwidth underwater optical communication.

    PubMed

    Hanson, Frank; Radic, Stojan

    2008-01-10

    We report error-free underwater optical transmission measurements at 1 Gbit/s (10⁹ bits/s) over a 2 m path in a laboratory water pipe with up to 36 dB of extinction. The source at 532 nm was derived from a 1064 nm continuous-wave laser diode that was intensity modulated, amplified, and frequency doubled in periodically poled lithium niobate. Measurements were made over a range of extinction by the addition of a Mg(OH)₂ and Al(OH)₃ suspension to the water path, and we were not able to observe any evidence of temporal pulse broadening. Results of Monte Carlo simulations over ocean water paths of several tens of meters indicate that optical communication data rates >1 Gbit/s can be supported and are compatible with high-capacity data transfer applications that require no physical contact.

  1. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  2. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    DOE PAGES

    Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.

    2017-10-12

    Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.

  3. Understanding the scaling of electron kinetics in the transition from collisional to collisionless conditions in microscale gas discharges

    NASA Astrophysics Data System (ADS)

    Tan, Xi; Go, David B.

    2018-02-01

    When gas discharge and plasma devices shrink to the microscale, the electrode distance in the device approaches the mean free path of electrons and they experience few collisions. As microscale gas discharge and plasma devices become more prevalent, the behavior of discharges at these collisionless and near-collisionless conditions needs to be understood. In conditions where the characteristic length d is much greater than the mean free path λ (i.e., macroscopic conditions), electron energy distributions (EEDs) and rate coefficients scale with the reduced electric field E/p. However, when d is comparable with or much lower than λ, this E/p scaling breaks down. In this work, particle-in-cell/Monte Carlo collision simulations are used to explore the behavior of the EED and subsequent reaction rate coefficients in microscale field emission-driven Townsend discharges for both an atomic (argon) and a molecular (hydrogen) gas. To understand the behavior, a pseudo-analytical model is developed for the spatially integrated EED and rate coefficients in the collisional to collisionless transition regime based on the weighted sum of a fully collisional, two-temperature Maxwellian EED and the ballistic EED. The theory helps clarify the relative contribution of ballistic electrons in these extreme conditions and can be used to more accurately predict when macroscopic E/p scaling fails at the microscale.

  4. Comparison of Quantum and Classical Monte Carlo on a Simple Model Phase Transition

    NASA Astrophysics Data System (ADS)

    Cohen, D. E.; Cohen, R. E.

    2005-12-01

    Most simulations of phase transitions in minerals use classical molecular dynamics or classical Monte Carlo. However, it is known that in some cases, quantum effects are quite large, even for perovskite oxides [1]. We have studied the simplest model of a phase transition where this can be tested, that of double wells interacting through an infinite-range interaction. The energy is E = ∑ᵢ (−A xᵢ² + B xᵢ⁴ + ξ xᵢ). We used the same parameters used in a study of vibrational spectra and soft-mode behavior [4], A=0.01902, B=0.14294, ξ=0.025 in Hartree atomic units. This gives a Tc of about 400 K. We varied the oscillator mass from 18 to 100. Classical Monte Carlo and path integral Monte Carlo (PIMC) were performed on this model. The maximum effect was for the lightest mass, in which PIMC gave a 75 K lower Tc than the classical simulation. This is similar to the reduction in Tc observed in PIMC simulations for BaTiO3 at zero pressure [1]. We will explore the effects of varying the well depths. Shallower wells would show a greater quantum effect, as was seen in the high-pressure BaTiO3 simulations, since pressure reduces the double well depths [5]. [1] Iniguez, J. & Vanderbilt, D. First-principles study of the temperature-pressure phase diagram of BaTiO3. Phys. Rev. Lett. 89, 115503 (2002). [2] Gillis, N. S. & Koehler, T. R. Phase transitions in a simple model ferroelectric -- comparison of exact and variational treatments of a molecular-field Hamiltonian. Phys. Rev. B 9, 3806 (1974). [3] Koehler, T. R. & Gillis, N. S. Phase Transitions in a Model of Interacting Anharmonic Oscillators. Phys. Rev. B 7, 4980 (1973). [4] Flocken, J. W., Guenther, R. A., Hardy, J. R. & Boyer, L. L. Dielectric response spectrum of a damped one-dimensional double-well oscillator. Phys. Rev. B 40, 11496-11501 (1989). [5] Cohen, R. E. Origin of ferroelectricity in oxide ferroelectrics and the difference in ferroelectric behavior of BaTiO3 and PbTiO3. Nature 358, 136-138 (1992).
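
    A classical Metropolis sketch for the quoted single-site energy is shown below; the infinite-range coupling between sites is omitted for brevity, so this illustrates only the sampling machinery, not the phase transition itself.

        import math
        import random

        A, B, XI = 0.01902, 0.14294, 0.025      # Hartree atomic units, as quoted

        def energy(x):
            return -A * x**2 + B * x**4 + XI * x

        def metropolis_mean_x(beta, n_steps=200_000, step=0.5, seed=0):
            """Classical Metropolis sampling of a single double-well coordinate."""
            rng = random.Random(seed)
            x, e = 0.0, energy(0.0)
            total = 0.0
            for _ in range(n_steps):
                x_new = x + rng.uniform(-step, step)
                e_new = energy(x_new)
                if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
                    x, e = x_new, e_new
                total += x
            return total / n_steps

        # kT at 400 K is about 1.27e-3 Hartree.
        print(metropolis_mean_x(beta=1.0 / 1.27e-3))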

  5. INNOVATIVE APPROACH FOR MEASURING AMMONIA AND METHANE FLUXES FROM A HOG FARM USING OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROSCOPY

    EPA Science Inventory

    The paper describes a new approach to quantify emissions from area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) technique and computed tomography (CT) technique. In this study, an...

  6. Properties of the two-dimensional heterogeneous Lennard-Jones dimers: An integral equation study

    PubMed Central

    Urbic, Tomaz

    2016-01-01

    Structural and thermodynamic properties of a planar heterogeneous soft dumbbell fluid are examined using Monte Carlo simulations and integral equation theory. Lennard-Jones particles of different sizes are the building blocks of the dimers. The site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions and the thermodynamic properties. Obtained results are compared to Monte Carlo simulation data. The critical parameters for selected types of dimers were also estimated and the influence of the Lennard-Jones parameters was studied. We have also tested the correctness of the site-site integral equation theory using different closures. PMID:27875894

  7. Quantum Dynamics in Biological Systems

    NASA Astrophysics Data System (ADS)

    Shim, Sangwoo

    In the first part of this dissertation, recent efforts to understand quantum mechanical effects in biological systems are discussed. In particular, long-lived quantum coherences observed during the electronic energy transfer process in the Fenna-Matthews-Olson complex at physiological conditions are studied extensively using theories of open quantum systems. In addition to the usual master equation based approaches, the effect of the protein structure is investigated in atomistic detail through the combined application of quantum chemistry and molecular dynamics simulations. To evaluate the thermalized reduced density matrix, a path-integral Monte Carlo method with a novel importance sampling approach is developed for excitons coupled to an arbitrary phonon bath at a finite temperature. In the second part of the thesis, simulations of molecular systems and applications to vibrational spectra are discussed. First, the quantum dynamics of a molecule is simulated by combining semiclassical initial value representation and density functional theory with analytic derivatives. A computationally-tractable approximation to the sum-of-states formalism of Raman spectra is subsequently discussed.

  8. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  9. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  10. Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions.

    PubMed

    Karasiev, Valentin V; Dufty, James W; Trickey, S B

    2018-02-16

    Realizing the potential for predictive density functional calculations of matter under extreme conditions depends crucially upon having an exchange-correlation (XC) free-energy functional accurate over a wide range of state conditions. Unlike the ground-state case, no such functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free-energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Its accuracy in the warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated T. Pressure shifts for hot electrons in compressed static fcc Al and for low-density Al demonstrate the combined magnitude of thermal and gradient effects handled well by this functional over a wide T range.

  11. Why are para-hydrogen clusters superfluid? A quantum theorem of corresponding states study.

    PubMed

    Sevryuk, Mikhail B; Toennies, J Peter; Ceperley, David M

    2010-08-14

    The quantum theorem of corresponding states is applied to N=13 and N=26 cold quantum fluid clusters to establish where para-hydrogen clusters lie in relation to more and less quantum delocalized systems. Path integral Monte Carlo calculations of the energies, densities, radial and pair distributions, and superfluid fractions are reported at T=0.5 K for a Lennard-Jones (LJ) (12,6) potential using six different de Boer parameters including the accepted value for hydrogen. The results indicate that the hydrogen clusters are on the borderline to being a nonsuperfluid solid but that the molecules are sufficiently delocalized to be superfluid. A general phase diagram for the total and kinetic energies of LJ (12,6) clusters encompassing all sizes from N=2 to N=infinity and for the entire range of de Boer parameters is presented. Finally the limiting de Boer parameters for quantum delocalization induced unbinding ("quantum unbinding") are estimated and the new results are found to agree with previous calculations for the bulk and smaller clusters.

  12. Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.

    2018-02-01

    Realizing the potential for predictive density functional calculations of matter under extreme conditions depends crucially upon having an exchange-correlation (XC) free-energy functional accurate over a wide range of state conditions. Unlike the ground-state case, no such functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free-energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Its accuracy in the warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated T. Pressure shifts for hot electrons in compressed static fcc Al and for low-density Al demonstrate the combined magnitude of thermal and gradient effects handled well by this functional over a wide T range.

  13. Topological charge quantization via path integration: An application of the Kustaanheimo-Stiefel transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inomata, A.; Junker, G.; Wilson, R.

    1993-08-01

    The unified treatment of the Dirac monopole, the Schwinger monopole, and the Aharonov-Bohm problem by Barut and Wilson is revisited via a path integral approach. The Kustaanheimo-Stiefel transformation of space and time is utilized to calculate the path integral for a charged particle in the singular vector potential. In the process of dimensional reduction, a topological charge quantization rule is derived, which contains Dirac's quantization condition as a special case. 32 refs.

  14. High-Throughput Computation and the Applicability of Monte Carlo Integration in Fatigue Load Estimation of Floating Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter A.; Stewart, Gordon; Lackner, Matthew

    Long-term fatigue loads for floating offshore wind turbines are hard to estimate because they require the evaluation of the integral of a highly nonlinear function over a wide variety of wind and wave conditions. Current design standards involve scanning over a uniform rectangular grid of metocean inputs (e.g., wind speed and direction and wave height and period), which becomes intractable in high dimensions as the number of required evaluations grows exponentially with dimension. Monte Carlo integration offers a potentially efficient alternative because it has theoretical convergence proportional to the inverse of the square root of the number of samples, which is independent of dimension. In this paper, we first report on the integration of the aeroelastic code FAST into NREL's systems engineering tool, WISDEM, and the development of a high-throughput pipeline capable of sampling from arbitrary distributions, running FAST on a large scale, and postprocessing the results into estimates of fatigue loads. Second, we use this tool to run a variety of studies aimed at comparing grid-based and Monte Carlo-based approaches to calculating long-term fatigue loads. We observe that for more than a few dimensions, the Monte Carlo approach can represent a large improvement in computational efficiency, but that as nonlinearity increases, the effectiveness of Monte Carlo is correspondingly reduced. The present work sets the stage for future research focusing on using advanced statistical methods for analysis of wind turbine fatigue as well as extreme loads.
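
    The dimensional-scaling argument can be made concrete with a toy integrand: the number of grid evaluations grows as n^d, while a Monte Carlo estimate uses a fixed budget regardless of dimension. The sketch below uses an arbitrary smooth test function, not a wind-turbine load model.

        import numpy as np

        def test_load(x):
            """Arbitrary smooth, nonlinear 'damage' surrogate on the unit hypercube."""
            return np.exp(np.sin(2.0 * np.pi * x).sum(axis=-1))

        rng = np.random.default_rng(0)
        for d in (2, 4, 6):
            # Grid scan: n points per axis means n**d evaluations.
            n = 10
            axes = np.meshgrid(*[np.linspace(0.0, 1.0, n)] * d, indexing="ij")
            grid_pts = np.stack(axes, axis=-1).reshape(-1, d)
            grid_mean = test_load(grid_pts).mean()
            # Monte Carlo: a fixed budget of 10_000 evaluations, independent of d.
            mc_mean = test_load(rng.random((10_000, d))).mean()
            print(d, grid_pts.shape[0], round(grid_mean, 4), round(mc_mean, 4))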

  15. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin’s problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.

  16. Vector navigation in desert ants, Cataglyphis fortis: celestial compass cues are essential for the proper use of distance information.

    PubMed

    Sommer, Stefan; Wehner, Rüdiger

    2005-10-01

    Foraging desert ants navigate primarily by path integration. They continually update homing direction and distance by employing a celestial compass and an odometer. Here we address the question of whether information about travel distance is correctly used in the absence of directional information. By using linear channels that were partly covered to exclude celestial compass cues, we were able to test the distance component of the path-integration process while suppressing the directional information. Our results suggest that the path integrator cannot process the distance information accumulated by the odometer while ants are deprived of celestial compass information. Hence, during path integration directional cues are a prerequisite for the proper use of travel-distance information by ants.

  17. Detector Design Considerations in High-Dimensional Artificial Immune Systems

    DTIC Science & Technology

    2012-03-22

    a method known as randomized RNS [15]. In this approach, Monte Carlo integration is used to determine the size of self and non-self within the given...feature space, then a number of randomly placed detectors are chosen according to Monte Carlo integration calculations. Simulated annealing is then...detector is only counted once). This value is termed ‘actual content’ because it does not include overlapping content, but only that content that is

  18. Markov chains of infinite order and asymptotic satisfaction of balance: application to the adaptive integration method.

    PubMed

    Earl, David J; Deem, Michael W

    2005-04-14

    Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.

  19. Stochastic evaluation of second-order many-body perturbation energies.

    PubMed

    Willow, Soohaeng Yoo; Kim, Kwang S; Hirata, So

    2012-11-28

    With the aid of the Laplace transform, the canonical expression of the second-order many-body perturbation correction to an electronic energy is converted into the sum of two 13-dimensional integrals, the 12-dimensional parts of which are evaluated by Monte Carlo integration. Weight functions are identified that are analytically normalizable, are finite and non-negative everywhere, and share the same singularities as the integrands. They thus generate appropriate distributions of four-electron walkers via the Metropolis algorithm, yielding correlation energies of small molecules within a few mEₕ of the correct values after 10⁸ Monte Carlo steps. This algorithm does away with the integral transformation as the hotspot of the usual algorithms, has a far superior size dependence of cost, does not suffer from the sign problem of some quantum Monte Carlo methods, and is potentially easily parallelizable and extensible to other more complex electron-correlation theories.
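
    A one-dimensional caricature of the weight-function idea described above (not the 13-dimensional MP2 integrand): samples are drawn from an analytically normalizable weight w that shares the integrand's singularity, and the integral is estimated as the average of f/w.

        import math
        import random

        def f(x):
            """Integrand with an integrable 1/sqrt(x) singularity at x = 0."""
            return math.exp(-x) / math.sqrt(x)

        def w(x):
            """Weight function: same singularity, analytically normalized on (0, 1]."""
            return 0.5 / math.sqrt(x)

        def importance_mc(n, seed=0):
            rng = random.Random(seed)
            total = 0.0
            for _ in range(n):
                x = (1.0 - rng.random()) ** 2    # draws x in (0, 1] with density w(x)
                total += f(x) / w(x)
            return total / n

        # Exact value is sqrt(pi) * erf(1), approximately 1.4936.
        print(importance_mc(100_000))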

  20. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.

  1. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximating the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties, typical of living tissues. The proposed approximate algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
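
    The kind of approximation described can be illustrated by replacing the logarithm used to sample photon path lengths, s = -ln(u)/mu_t, with a low-order polynomial; the coefficients below come from a simple least-squares fit and are purely illustrative, not the authors' polynomial or rational approximations.

        import numpy as np

        mu_t = 10.0   # illustrative total attenuation coefficient, 1/mm

        # Least-squares polynomial replacement for ln(u) on the sampling interval.
        u_fit = np.linspace(1e-3, 1.0, 2000)
        coeffs = np.polyfit(u_fit, np.log(u_fit), deg=7)

        def path_length_exact(u):
            return -np.log(u) / mu_t

        def path_length_approx(u):
            return -np.polyval(coeffs, u) / mu_t

        u = np.random.default_rng(0).uniform(1e-3, 1.0, 100_000)
        abs_err = np.abs(path_length_approx(u) - path_length_exact(u))
        # The error is largest near small u, where the logarithm varies fastest.
        print("max |error| in the sampled path length (mm):", abs_err.max())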

  2. Measurement of J-integral in CAD/CAM dental ceramics and composite resin by digital image correlation.

    PubMed

    Jiang, Yanxia; Akkus, Anna; Roperto, Renato; Akkus, Ozan; Li, Bo; Lang, Lisa; Teich, Sorin

    2016-09-01

    Ceramic and composite resin blocks for CAD/CAM machining of dental restorations are becoming more common. The sample sizes afforded by these blocks are smaller than ideal for stress intensity factor (SIF) based tests. The J-integral measurement calls for full field strain measurement, making it challenging to conduct. Accordingly, the J-integral values of dental restoration materials used in CAD/CAM restorations have not been reported to date. Digital image correlation (DIC) provides full field strain maps, making it possible to calculate the J-integral value. The aim of this study was to measure the J-integral value for CAD/CAM restorative materials. Four types of materials (sintered IPS E-MAX CAD, non-sintered IPS E-MAX CAD, Vita Mark II and Paradigm MZ100) were used to prepare beam samples for three-point bending tests. J-integrals were calculated for different integral path sizes and locations with respect to the crack tip. The J-integral at path 1 for each material was 1.26±0.31×10⁻⁴ MPa·m for MZ 100, 0.59±0.28×10⁻⁴ MPa·m for sintered E-MAX, 0.19±0.07×10⁻⁴ MPa·m for VM II, and 0.21±0.05×10⁻⁴ MPa·m for non-sintered E-MAX. There were no significant differences between different integral path sizes, except for the non-sintered E-MAX group. J-integral paths of non-sintered E-MAX located within 42% of the height of the sample provided consistent values whereas paths outside this range resulted in lower J-integral values. Moreover, no significant difference was found among different integral path locations. The critical SIF was calculated from the J-integral (KJ) along with geometry derived SIF values (KI). KI values were comparable with KJ and geometry based SIF values obtained from literature. Therefore, the DIC-derived J-integral is a reliable way to assess the fracture toughness of small-sized specimens for dental CAD/CAM restorative materials; however, with caution applied to the selection of J-integral path. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Monte Carlo Simulation of Massive Absorbers for Cryogenic Calorimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandt, D.; Asai, M.; Brink, P.L.

    There is a growing interest in cryogenic calorimeters with macroscopic absorbers for applications such as dark matter direct detection and rare event search experiments. The physics of energy transport in calorimeters with absorber masses exceeding several grams is made complex by the anisotropic nature of the absorber crystals as well as the changing mean free paths as phonons decay to progressively lower energies. We present a Monte Carlo model capable of simulating anisotropic phonon transport in cryogenic crystals. We have initiated the validation process and discuss the level of agreement between our simulation and experimental results reported in the literature, focusing on heat pulse propagation in germanium. The simulation framework is implemented using Geant4, a toolkit originally developed for high-energy physics Monte Carlo simulations. Geant4 has also been used for nuclear and accelerator physics, and applications in medical and space sciences. We believe that our current work may open up new avenues for applications in material science and condensed matter physics.

  4. An atomic and molecular fluid model for efficient edge-plasma transport simulations at high densities

    NASA Astrophysics Data System (ADS)

    Rognlien, Thomas; Rensink, Marvin

    2016-10-01

    Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require the coupling of the plasma to recycled or injected neutral gas. Various neutral models are used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for device simulations at the high densities and sizes typical of fusion power plants, where the neutral collisional mean-free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules and easily accommodates nonlinear neutral-neutral collision effects. In addition to the strong charge exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.

  5. A review of path-independent integrals in elastic-plastic fracture mechanics

    NASA Technical Reports Server (NTRS)

    Kim, Kwang S.; Orange, Thomas W.

    1988-01-01

    The objective of this paper is to review the path-independent (P-I) integrals in elastic-plastic fracture mechanics which have been proposed in recent years to overcome the limitations imposed on the J-integral. The P-I integrals considered are the J-integral by Rice (1968), the thermoelastic P-I integrals by Wilson and Yu (1979) and Gurtin (1979), the J*-integral by Blackburn (1972), the J_θ-integral by Ainsworth et al. (1978), the J-integral by Kishimoto et al. (1980), and the ΔT_p and ΔT_p* integrals by Atluri et al. (1982). The theoretical foundation of the P-I integrals is examined with an emphasis on whether or not path independence is maintained in the presence of nonproportional loading and unloading in the plastic regime, thermal gradients, and material inhomogeneities. The similarities, differences, salient features, and limitations of the P-I integrals are discussed. Comments are also made with regard to the physical meaning, the possibility of experimental measurement, and computational aspects.

  6. A review of path-independent integrals in elastic-plastic fracture mechanics, task 4

    NASA Technical Reports Server (NTRS)

    Kim, K. S.

    1985-01-01

    The path independent (P-I) integrals in elastic plastic fracture mechanics which have been proposed in recent years to overcome the limitations imposed on the J integral are reviewed. The P-I integrals considered herein are the J integral by Rice, the thermoelastic P-I integrals by Wilson and Yu and by Gurtin, the J* integral by Blackburn, the J sub theta integral by Ainsworth et al., the J integral by Kishimoto et al., and the delta T sub p and delta T* sub p integrals by Atluri et al. The theoretical foundation of these P-I integrals is examined with emphasis on whether or not path independence is maintained in the presence of nonproportional loading and unloading in the plastic regime, thermal gradients, and material inhomogeneities. The similarities, differences, salient features, and limitations of these P-I integrals are discussed. Comments are also made with regard to the physical meaning, the possibility of experimental measurement, and computational aspects.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbic, Tomaz, E-mail: tomaz.urbic@fkkt.uni-lj.si; Dias, Cristiano L.

    The thermodynamic and structural properties of the planar soft-site dumbbell fluid are examined by Monte Carlo simulations and integral equation theory. The dimers are built of two Lennard-Jones segments. Site-site integral equation theory in two dimensions is used to calculate the site-site radial distribution functions for a range of elongations and densities, and the results are compared with Monte Carlo simulations. The critical parameters for selected types of dimers were also estimated. We analyzed the influence of the bond length on the critical point and tested the correctness of the site-site integral equation theory with different closures. The integral equations can be used to predict the phase diagram of dimers whose molecular parameters are known.

  8. 77 FR 33486 - Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-06

    ... INTERNATIONAL TRADE COMMISSION [Docket No. 2899] Certain Integrated Circuit Packages Provided With... complaint entitled Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and..., telephone (202) 205-2000. The public version of the complaint can be accessed on the Commission's electronic...

  9. BOOK REVIEW: Path Integrals in Field Theory: An Introduction

    NASA Astrophysics Data System (ADS)

    Ryder, Lewis

    2004-06-01

    In the 1960s Feynman was known to particle physicists as one of the people who solved the major problems of quantum electrodynamics, his contribution famously introducing what are now called Feynman diagrams. To other physicists he gained a reputation as the author of the Feynman Lectures on Physics; in addition some people were aware of his work on the path integral formulation of quantum theory, and a very few knew about his work on gravitation and Yang--Mills theories, which made use of path integral methods. Forty years later the scene is rather different. Many of the problems of high energy physics are solved; and the standard model incorporates Feynman's path integral method as a way of proving the renormalisability of the gauge (Yang--Mills) theories involved. Gravitation is proving a much harder nut to crack, but here also questions of renormalisability are couched in path-integral language. What is more, theoretical studies of condensed matter physics now also appeal to this technique for quantisation, so the path integral method is becoming part of the standard apparatus of theoretical physics. Chapters on it appear in a number of recent books, and a few books have appeared devoted to this topic alone; the book under review is a very recent one. Path integral techniques have the advantage of enormous conceptual appeal and the great disadvantage of mathematical complexity, this being partly the result of messy integrals but more fundamentally due to the notions of functional differentiation and integration which are involved in the method. All in all this subject is not such an easy ride. Mosel's book, described as an introduction, is aimed at graduate students and research workers in particle physics. It assumes a background knowledge of quantum mechanics, both non-relativistic and relativistic. After three chapters on the path integral formulation of non-relativistic quantum mechanics there are eight chapters on scalar and spinor field theory, followed by three on gauge field theories---quantum electrodynamics and Yang--Mills theories, Faddeev--Popov ghosts and so on. There is no treatment of the quantisation of gravity. Thus in about 200 pages the reader has the chance to learn in some detail about a most important area of modern physics. The subject is tough but the style is clear and pedagogic, results for the most part being derived explicitly. The choice of topics included is mainstream and sensible and one has a clear sense that the author knows where he is going and is a reliable guide. Path Integrals in Field Theory is clearly the work of a man with considerable teaching experience and is recommended as a readable and helpful account of a rather non-trivial subject.

  10. Modelling rapid subsurface flow at the hillslope scale with explicit representation of preferential flow paths

    NASA Astrophysics Data System (ADS)

    Wienhöfer, J.; Zehe, E.

    2012-04-01

    Rapid lateral flow processes via preferential flow paths are widely accepted to play a key role in the rainfall-runoff response of temperate humid headwater catchments. A quantitative description of these processes, however, is still a major challenge in hydrological research, not least because detailed information about the architecture of subsurface flow paths is often impossible to obtain at a natural site without disturbing the system. Our study combines physically based modelling and field observations with the objective of better understanding how flow network configurations influence the hydrological response of hillslopes. The system under investigation is a forested hillslope with a small perennial spring at the study area Heumöser, a headwater catchment of the Dornbirnerach in Vorarlberg, Austria. In-situ point measurements of field-saturated hydraulic conductivity and dye staining experiments at the plot scale revealed that shrinkage cracks and biogenic macropores function as preferential flow paths in the fine-textured soils of the study area, and these preferential flow structures were active in fast subsurface transport of artificial tracers at the hillslope scale. For modelling of water and solute transport, we followed the approach of implementing preferential flow paths as spatially explicit structures of high hydraulic conductivity and low retention within the 2D process-based model CATFLOW. Many potential configurations of the flow path network were generated as realisations of a stochastic process informed by macropore characteristics derived from the plot scale observations. Together with different realisations of soil hydraulic parameters, this approach results in a Monte Carlo study. The model setups were used for short-term simulation of a sprinkling and tracer experiment, and the results were evaluated against measured discharges and tracer breakthrough curves. Although both criteria were used for model evaluation, several model setups still produced acceptable matches to the observed behaviour. These setups were selected for long-term simulation, the results of which were compared against water level measurements at two piezometers along the hillslope and the integral discharge response of the spring to reject some non-behavioural model setups and further reduce equifinality. The results of this study indicate that process-based modelling can provide a means to distinguish preferential flow networks on the hillslope scale when complementary measurements to constrain the range of behavioural model setups are available. These models can further be employed as a virtual reality to investigate the characteristics of flow path architectures and explore effective parameterisations for larger scale applications.

  11. Correlational and thermodynamic properties of finite-temperature electron liquids in the hypernetted-chain approximation.

    PubMed

    Tanaka, Shigenori

    2016-12-07

    Correlational and thermodynamic properties of homogeneous electron liquids at finite temperatures are theoretically analyzed in terms of dielectric response formalism with the hypernetted-chain (HNC) approximation and its modified version. The static structure factor and the local-field correction to describe the strong Coulomb-coupling effects beyond the random-phase approximation are self-consistently calculated through solution to integral equations in the paramagnetic (spin unpolarized) and ferromagnetic (spin polarized) states. In the ground state with the normalized temperature θ=0, the present HNC scheme well reproduces the exchange-correlation energies obtained by quantum Monte Carlo (QMC) simulations over the whole fluid phase (the coupling constant r s ≤100), i.e., within 1% and 2% deviations from putative best QMC values in the paramagnetic and ferromagnetic states, respectively. As compared with earlier studies based on the Singwi-Tosi-Land-Sjölander and modified convolution approximations, some improvements on the correlation energies and the correlation functions including the compressibility sum rule are found in the intermediate to strong coupling regimes. When applied to the electron fluids at intermediate Fermi degeneracies (θ≈1), the static structure factors calculated in the HNC scheme show good agreements with the results obtained by the path integral Monte Carlo (PIMC) simulation, while a small negative region in the radial distribution function is observed near the origin, which may be associated with a slight overestimation for the exchange-correlation hole in the HNC approximation. The interaction energies are calculated for various combinations of density and temperature parameters ranging from strong to weak degeneracy and from weak to strong coupling, and the HNC values are then parametrized as functions of r s and θ. The HNC exchange-correlation free energies obtained through the coupling-constant integration show reasonable agreements with earlier results including the PIMC-based fitting over the whole fluid region at finite degeneracies in the paramagnetic state. In contrast, a systematic difference between the HNC and PIMC results is observed in the ferromagnetic state, which suggests a necessity of further studies on the exchange-correlation free energies from both aspects of analytical theory and simulation.

  12. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using Wiener measure which uses Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies for He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging as done in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.

  13. Real-time Feynman path integral with Picard–Lefschetz theory and its applications to quantum tunneling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanizaki, Yuya, E-mail: yuya.tanizaki@riken.jp; Theoretical Research Division, Nishina Center, RIKEN, Wako 351-0198; Koike, Takayuki, E-mail: tkoike@ms.u-tokyo.ac.jp

    Picard–Lefschetz theory is applied to path integrals of quantum mechanics, in order to compute real-time dynamics directly. After discussing basic properties of real-time path integrals on Lefschetz thimbles, we demonstrate the computational method in a concrete way by solving three simple examples of quantum mechanics. It is applied to quantum mechanics of a double-well potential, and quantum tunneling is discussed. We identify all of the complex saddle points of the classical action, and their properties are discussed in detail. However, a significant theoretical difficulty appears in rewriting the original path integral into a sum of path integrals on Lefschetz thimbles. We discuss the generality of this problem and its importance. Real-time tunneling processes are shown to be described by those complex saddle points, and thus a semi-classical description of real-time quantum tunneling becomes possible on solid ground if that problem can be solved. - Highlights: • Real-time path integral is studied based on Picard–Lefschetz theory. • Lucid demonstration is given through simple examples of quantum mechanics. • This technique is applied to quantum mechanics of the double-well potential. • Difficulty for practical applications is revealed, and we discuss its generality. • Quantum tunneling is shown to be closely related to complex classical solutions.

  14. Coupled electron-ion Monte Carlo simulation of hydrogen molecular crystals

    NASA Astrophysics Data System (ADS)

    Rillo, Giovanni; Morales, Miguel A.; Ceperley, David M.; Pierleoni, Carlo

    2018-03-01

    We performed simulations for solid molecular hydrogen at high pressures (250 GPa ≤ P ≤ 500 GPa) along two isotherms at T = 200 K (phase III) and at T = 414 K (phase IV). At T = 200 K, we considered likely candidates for phase III, the C2c and Cmca12 structures, while at T = 414 K in phase IV, we studied the Pc48 structure. We employed both Coupled Electron-Ion Monte Carlo (CEIMC) and Path Integral Molecular Dynamics (PIMD). The latter is based on Density Functional Theory (DFT) with the van der Waals approximation (vdW-DF). The comparison between the two methods allows us to address the question of the accuracy of the exchange-correlation approximation of DFT for thermal and quantum protons without resorting to perturbation theories. In general, we find that atomic and molecular fluctuations in PIMD are larger than in CEIMC, which suggests that the potential energy surface from vdW-DF is less structured than the one from quantum Monte Carlo. We find qualitatively different behaviors for systems prepared in the C2c structure for increasing pressure. Within PIMD, the C2c structure is dynamically partially stable for P ≤ 250 GPa only: it retains the symmetry of the molecular centers but not the molecular orientation; at intermediate pressures, it develops layered structures like Pbcn or Ibam and transforms to the metallic Cmca-4 structure at P ≥ 450 GPa. Instead, within CEIMC, the C2c structure is found to be dynamically stable at least up to 450 GPa; at increasing pressure, the molecular bond length increases and the nuclear correlation decreases. For the other two structures, the two methods are in qualitative agreement although quantitative differences remain. We discuss various structural properties and the electrical conductivity. We find that these structures become conducting around 350 GPa but the metallic Drude-like behavior is reached only at around 500 GPa, consistent with recent experimental claims.

  15. A comparative study of inelastic scattering models at energy levels ranging from 0.5 keV to 10 keV

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Yu; Lin, Chun-Hung

    2017-03-01

    Six models, including a single-scattering model, four hybrid models, and one dielectric function model, were evaluated using Monte Carlo simulations for aluminum and copper at incident beam energies ranging from 0.5 keV to 10 keV. The inelastic mean free path, mean energy loss per unit path length, and backscattering coefficients obtained by these models are compared and discussed to understand the merits of the various models. ANOVA (analysis of variance) statistical models were used to quantify the effects of inelastic cross section and energy loss models on the basis of the deviation of the simulated results from the experimental data for the inelastic mean free path, the mean energy loss per unit path length, and the backscattering coefficient, as well as their correlations. This is believed to be the first application of ANOVA models to the evaluation of inelastic electron-beam scattering models. This approach is an improvement over the traditional approach, which involves only visual estimation of the difference between the experimental data and simulated results. The data suggest that the optimization of the effective electron number per atom, binding energy, and cut-off energy of an inelastic model for different materials at different beam energies is more important than the selection of inelastic models for Monte Carlo electron scattering simulation. During the simulations, parameters in the equations should be tuned according to different materials for different beam energies rather than merely employing default parameters for an arbitrary material. Energy loss models and cross-section formulas are not the main factors influencing energy loss. Comparison of the deviation of the simulated results from the experimental data shows a significant correlation (p < 0.05) between the backscattering coefficient and energy loss per unit path length. The inclusion of backscattering electrons generated by both primary and secondary electrons for backscattering coefficient simulation is recommended for elements with high atomic numbers. In hybrid models, introducing the inner shell ionization model improves the accuracy of simulated results.

  16. Weinberg propagator of a free massive particle with an arbitrary spin from the BFV-BRST path integral

    NASA Astrophysics Data System (ADS)

    Zima, V. G.; Fedoruk, S. O.

    1999-11-01

    The transition amplitude is obtained for a free massive particle of arbitrary spin by calculating the path integral in the index-spinor formulation within the BFV-BRST approach. No renormalizations of the path integral measure were applied. The calculation has given the Weinberg propagator written in the index-free form by the use of an index spinor. The choice of boundary conditions on the index spinor determines the holomorphic or antiholomorphic representation for the canonical description of particle/antiparticle spin.

  17. Crossing Boundaries: Nativity, Ethnicity, and Mate Selection

    PubMed Central

    Qian, Zhenchao; Glick, Jennifer E.; Baston, Christie

    2016-01-01

    The influx of immigrants has increased diversity among ethnic minorities and indicates that they may take multiple integration paths in American society. Previous research on ethnic integration often focuses on panethnic differences, and few studies have explored ethnic diversity within a racial or panethnic context. Using 2000 U.S. census data for Puerto Rican, Mexican, Chinese, and Filipino origin individuals, we examine differences in marriage and cohabitation with whites, with other minorities, within a panethnic group, and within an ethnic group by nativity status. Ethnic endogamy is strong and, to a lesser extent, so is panethnic endogamy. Yet, marital or cohabiting unions with whites remain an important path of integration but differ significantly by ethnicity, nativity, age at arrival, and educational attainment. Meanwhile, ethnic differences in marriage and cohabitation with other racial or ethnic minorities are strong. Our analysis shows that unions with whites remain a major path of integration, but other paths of integration have also become viable options for all ethnic groups. PMID:22350840

  18. Multi-Dimensional, Mesoscopic Monte Carlo Simulations of Inhomogeneous Reaction-Drift-Diffusion Systems on Graphics-Processing Units

    PubMed Central

    Vigelius, Matthias; Meyer, Bernd

    2012-01-01

    For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuinely stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters, is underway. PMID:22506001

  19. Lefschetz thimbles in fermionic effective models with repulsive vector-field

    NASA Astrophysics Data System (ADS)

    Mori, Yuto; Kashiwa, Kouji; Ohnishi, Akira

    2018-06-01

    We discuss two problems in complexified auxiliary fields in fermionic effective models, the auxiliary sign problem associated with the repulsive vector-field and the choice of the cut for the scalar field appearing from the logarithmic function. In the fermionic effective models with attractive scalar and repulsive vector-type interaction, the auxiliary scalar and vector fields appear in the path integral after the bosonization of fermion bilinears. When we make the path integral well-defined by the Wick rotation of the vector field, the oscillating Boltzmann weight appears in the partition function. This "auxiliary" sign problem can be solved by using the Lefschetz-thimble path-integral method, where the integration path is constructed in the complex plane. Another serious obstacle in the numerical construction of Lefschetz thimbles is caused by singular points and cuts induced by multivalued functions of the complexified scalar field in the momentum integration. We propose a new prescription which fixes gradient flow trajectories on the same Riemann sheet in the flow evolution by performing the momentum integration in the complex domain.

  20. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    PubMed Central

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472
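
    A common way to realize the cubic spline path is a cubic Hermite curve that matches the measured entry and exit positions and directions of each proton; the sketch below is an illustrative construction under that assumption, not the authors' code, with the straight-line path shown for comparison.

        import numpy as np

        def cubic_spline_path(p_in, d_in, p_out, d_out, n_points=100):
            """Cubic Hermite curve between entry and exit states.

            p_in, p_out : entry/exit positions (3-vectors)
            d_in, d_out : unit direction vectors at entry/exit
            """
            p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
            p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
            scale = np.linalg.norm(p_out - p_in)       # tangent magnitude ~ chord length
            t = np.linspace(0.0, 1.0, n_points)[:, None]
            h00 = 2 * t**3 - 3 * t**2 + 1
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            return h00 * p_in + h10 * scale * d_in + h01 * p_out + h11 * scale * d_out

        def straight_line_path(p_in, p_out, n_points=100):
            """Straight-line path: ignore the measured directions entirely."""
            p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
            t = np.linspace(0.0, 1.0, n_points)[:, None]
            return (1 - t) * p_in + t * p_out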

  1. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial-time algorithm is known that counts the number of optimal sorting paths or samples from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion with the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids the bias inherent in the IS technique.

  2. Which coordinate system for modelling path integration?

    PubMed

    Vickerstaff, Robert J; Cheung, Allen

    2010-03-21

    Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that, on balance, the class of home vectors that includes the geocentric Cartesian coordinate system appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors. Copyright 2009 Elsevier Ltd. All rights reserved.
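
    As a concrete instance of the class singled out above, the sketch below accumulates a home vector in geocentric (world-fixed) Cartesian coordinates from step lengths and compass headings; the variable names and the toy excursion are illustrative assumptions, not a model from the review.

        import numpy as np

        def update_position(position, heading, step_length):
            """One path integration update: heading in radians (allothetic compass
            or integrated turns), step_length from odometry."""
            step = step_length * np.array([np.cos(heading), np.sin(heading)])
            return position + step

        position = np.zeros(2)
        for heading, step in [(0.0, 1.0), (np.pi / 2, 2.0), (np.pi, 0.5)]:  # a toy excursion
            position = update_position(position, heading, step)

        home_vector = -position   # points from the current location back to the nest
        print("home vector:", home_vector, "distance home:", np.linalg.norm(home_vector))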

  3. Leaky Waves in Metamaterials for Antenna Applications

    DTIC Science & Technology

    2011-07-01

    excitation problems, electromagnetic fields are often represented as Sommerfeld integrals [31,32]. A detailed discussion about Sommerfeld integral is...source removed. In the rest of this section, a detailed discussion about Sommerfeld Integral Path is presented. 4.1 Spectral Domain Approach 4.1.1... Sommerfeld integral path for evaluating fields accurately and efficiently, the radiation intensity and directivity of electric/magnetic dipoles over a grounded

  4. Nuclide Depletion Capabilities in the Shift Monte Carlo Code

    DOE PAGES

    Davidson, Gregory G.; Pandya, Tara M.; Johnson, Seth R.; ...

    2017-12-21

    A new depletion capability has been developed in the Exnihilo radiation transport code suite. This capability enables massively parallel domain-decomposed coupling between the Shift continuous-energy Monte Carlo solver and the nuclide depletion solvers in ORIGEN to perform high-performance Monte Carlo depletion calculations. This paper describes this new depletion capability and discusses its various features, including a multi-level parallel decomposition, high-order transport-depletion coupling, and energy-integrated power renormalization. Several test problems are presented to validate the new capability against other Monte Carlo depletion codes, and the parallel performance of the new capability is analyzed.

  5. Transforming high-dimensional potential energy surfaces into sum-of-products form using Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Schröder, Markus; Meyer, Hans-Dieter

    2017-08-01

    We propose a Monte Carlo method, "Monte Carlo Potfit," for transforming high-dimensional potential energy surfaces evaluated on discrete grid points into a sum-of-products form, more precisely into a Tucker form. To this end we use a variational ansatz in which we replace numerically exact integrals with Monte Carlo integrals. This largely reduces the numerical cost by avoiding the evaluation of the potential on all grid points and allows a treatment of surfaces up to 15-18 degrees of freedom. We furthermore show that the error made with this ansatz can be controlled and vanishes in certain limits. We present calculations on the potential of HFCO to demonstrate the features of the algorithm. To demonstrate the power of the method, we transformed a 15D potential of the protonated water dimer (Zundel cation) in a sum-of-products form and calculated the ground and lowest 26 vibrationally excited states of the Zundel cation with the multi-configuration time-dependent Hartree method.
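
    The core idea of replacing exact grid sums by Monte Carlo estimates can be illustrated with a toy example (an assumption-laden sketch, not the Monte Carlo Potfit implementation): the overlap of a high-dimensional potential with a product basis function, which a grid-based fit would evaluate on every grid point, is instead estimated from a random sample of grid points.

        import numpy as np

        rng = np.random.default_rng(1)
        ndim, npts = 6, 32                      # 32**6 ~ 1e9 grid points in total
        grid = np.linspace(-2.0, 2.0, npts)     # same 1D grid in every degree of freedom

        def potential(q):                        # toy coupled potential, q shape (n, ndim)
            return 0.5 * np.sum(q**2, axis=1) + 0.1 * np.prod(q, axis=1)

        def basis(q, width=1.0):                 # a single product-type basis function
            return np.exp(-np.sum(q**2, axis=1) / (2 * width**2))

        def mc_overlap(n_samples):
            """Monte Carlo estimate of (1/N_grid) * sum over the grid of V(q) * phi(q)."""
            idx = rng.integers(0, npts, size=(n_samples, ndim))   # random grid points
            q = grid[idx]
            return np.mean(potential(q) * basis(q))

        for n in (10**3, 10**4, 10**5):
            print(n, mc_overlap(n))

    The full method uses such estimates inside a variational optimization of the sum-of-products (Tucker) factors, with the statistical error controlled in the sense described in the abstract.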

  6. Koopman-von Neumann formulation of classical Yang-Mills theories: I

    NASA Astrophysics Data System (ADS)

    Carta, P.; Gozzi, E.; Mauro, D.

    2006-03-01

    In this paper we present the Koopman-von Neumann (KvN) formulation of classical non-Abelian gauge field theories. In particular we shall explore the functional (or classical path integral) counterpart of the KvN method. In the quantum path integral quantization of Yang-Mills theories concepts like gauge-fixing and Faddeev-Popov determinant appear in a quite natural way. We will prove that these same objects are needed also in this classical path integral formulation for Yang-Mills theories. We shall also explore the classical path integral counterpart of the BFV formalism and build all the associated universal and gauge charges. These last are quite different from the analog quantum ones and we shall show the relation between the two. This paper lays the foundation of this formalism which, due to the many auxiliary fields present, is rather heavy. Applications to specific topics outlined in the paper will appear in later publications.

  7. A Statistical Simulation Approach to Safe Life Fatigue Analysis of Redundant Metallic Components

    NASA Technical Reports Server (NTRS)

    Matthews, William T.; Neal, Donald M.

    1997-01-01

    This paper introduces a dual active load path fail-safe fatigue design concept analyzed by Monte Carlo simulation. The concept utilizes the inherent fatigue life differences between selected pairs of components for an active dual path system, enhanced by a stress level bias in one component. The concept is applied to a baseline design: a safe-life fatigue problem studied in an American Helicopter Society (AHS) round robin. The dual active path design is compared with a two-element standby fail-safe system and the baseline design for life at specified reliability levels and weight. The sensitivity of life estimates for both the baseline and fail-safe designs was examined by considering normal and Weibull distribution laws and coefficient of variation levels. Results showed that the biased dual path system lifetimes, for both the first element failure and residual life, were much greater than for standby systems. The sensitivity of the residual life-weight relationship was not excessive at reliability levels up to R = 0.9999 and the weight penalty was small. The sensitivity of life estimates increases dramatically at higher reliability levels.

  8. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872

  9. Option pricing, stochastic volatility, singular dynamics and constrained path integrals

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Hojman, Sergio A.

    2014-01-01

    Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated to the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21] while in Baaquie (2007,1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac’s method for constrained systems (Dirac, 1958,1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets an Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.

  10. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    PubMed

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.

  11. Treating electron transport in MCNP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, H.G.

    1996-12-31

    The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.

  12. Path integral approach to closed form pricing formulas in the Heston framework.

    NASA Astrophysics Data System (ADS)

    Lemmens, Damiaan; Wouters, Michiel; Tempere, Jacques; Foulon, Sven

    2008-03-01

    We present a path integral approach for finding closed-form formulas for option prices in the framework of the Heston model. The first model for determining option prices was the Black-Scholes model, which assumed that the logreturn followed a Wiener process with a given drift and constant volatility. To provide a realistic description of the market, the Black-Scholes results must be extended to include stochastic volatility. This is achieved by the Heston model, which assumes that the volatility follows a mean-reverting square-root process. Current applications of the Heston model are hampered by the unavailability of fast numerical methods, due to a lack of closed-form formulae. Therefore the search for closed-form solutions is an essential step before the qualitatively better stochastic volatility models can be used in practice. To attain this goal we outline a simplified path integral approach yielding straightforward results for vanilla Heston options with correlation. Extensions to barrier options and other path-dependent options are discussed, and the new derivation is compared to existing results obtained from alternative path-integral approaches (Dragulescu, Kleinert).
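
    For comparison with such closed-form results, the sketch below prices a vanilla European call under the Heston dynamics described above using a plain Euler (full-truncation) Monte Carlo scheme; it is a numerical cross-check under assumed parameter values, not the path-integral derivation of the paper.

        import numpy as np

        def heston_mc_call(s0, k, r, t, v0, kappa, theta, sigma, rho,
                           n_steps=200, n_paths=100000, seed=0):
            """European call under Heston by Euler full-truncation Monte Carlo."""
            rng = np.random.default_rng(seed)
            dt = t / n_steps
            s = np.full(n_paths, np.log(s0))     # log-price
            v = np.full(n_paths, v0)             # variance
            for _ in range(n_steps):
                z1 = rng.standard_normal(n_paths)
                z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
                v_pos = np.maximum(v, 0.0)       # full truncation keeps the variance usable
                s += (r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1
                v += kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
            payoff = np.maximum(np.exp(s) - k, 0.0)
            return np.exp(-r * t) * payoff.mean()

        # Illustrative parameters (assumptions, not taken from the paper).
        print(heston_mc_call(s0=100, k=100, r=0.02, t=1.0, v0=0.04,
                             kappa=1.5, theta=0.04, sigma=0.3, rho=-0.7))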

  13. Path integration in tactile perception of shapes.

    PubMed

    Moscatelli, Alessandro; Naceri, Abdeldjallil; Ernst, Marc O

    2014-11-01

    Whenever we move the hand across a surface, tactile signals provide information about the relative velocity between the skin and the surface. If the system were able to integrate the tactile velocity information over time, cutaneous touch may provide an estimate of the relative displacement between the hand and the surface. Here, we asked whether humans are able to form a reliable representation of the motion path from tactile cues only, integrating motion information over time. In order to address this issue, we conducted three experiments using tactile motion and asked participants (1) to estimate the length of a simulated triangle, (2) to reproduce the shape of a simulated triangular path, and (3) to estimate the angle between two-line segments. Participants were able to accurately indicate the length of the path, whereas the perceived direction was affected by a direction bias (inward bias). The response pattern was thus qualitatively similar to the ones reported in classical path integration studies involving locomotion. However, we explain the directional biases as the result of a tactile motion aftereffect. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Light transport on path-space manifolds

    NASA Astrophysics Data System (ADS)

    Jakob, Wenzel Alban

    The pervasive use of computer-generated graphics in our society has led to strict demands on their visual realism. Generally, users of rendering software want their images to look, in various ways, "real", which has been a key driving force towards methods that are based on the physics of light transport. Until recently, industrial practice has relied on a different set of methods that had comparatively little rigorous grounding in physics---but within the last decade, advances in rendering methods and computing power have come together to create a sudden and dramatic shift, in which physics-based methods that were formerly thought impractical have become the standard tool. As a consequence, considerable attention is now devoted towards making these methods as robust as possible. In this context, robustness refers to an algorithm's ability to process arbitrary input without large increases of the rendering time or degradation of the output image. One particularly challenging aspect of robustness entails simulating the precise interaction of light with all the materials that comprise the input scene. This dissertation focuses on one specific group of materials that has fundamentally been the most important source of difficulties in this process. Specular materials, such as glass windows, mirrors or smooth coatings (e.g. on finished wood), account for a significant percentage of the objects that surround us every day. It is perhaps surprising, then, that it is not well-understood how they can be accommodated within the theoretical framework that underlies some of the most sophisticated rendering methods available today. Many of these methods operate using a theoretical framework known as path space integration. But this framework makes no provisions for specular materials: to date, it is not clear how to write down a path space integral involving something as simple as a piece of glass. Although implementations can in practice still render these materials by side-stepping limitations of the theory, they often suffer from unusably slow convergence; improvements to this situation have been hampered by the lack of a thorough theoretical understanding. We address these problems by developing a new theory of path-space light transport which, for the first time, cleanly incorporates specular scattering into the standard framework. Most of the results obtained in the analysis of the ideally smooth case can also be generalized to rendering of glossy materials and volumetric scattering so that this dissertation also provides a powerful new set of tools for dealing with them. The basis of our approach is that each specular material interaction locally collapses the dimension of the space of light paths so that all relevant paths lie on a submanifold of path space. We analyze the high-dimensional differential geometry of this submanifold and use the resulting information to construct an algorithm that is able to "walk" around on it using a simple and efficient equation-solving iteration. This manifold walking algorithm then constitutes the key operation of a new type of Markov Chain Monte Carlo (MCMC) rendering method that computes lighting through very general families of paths that can involve arbitrary combinations of specular, near-specular, glossy, and diffuse surface interactions as well as isotropic or highly anisotropic volume scattering. We demonstrate our implementation on a range of challenging scenes and evaluate it against previous methods.

  15. Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Zhou, Zhennan

    2018-02-01

    To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to an averaged Hamiltonian with respect to all possible surface index configurations of the ring polymer and thus connects the surface hopping approach to the mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate a large improvement in sampling efficiency for the infinite swapping limit compared with direct simulation of path-integral molecular dynamics with surface hopping.

  16. Bayesian Analysis of Evolutionary Divergence with Genomic Data under Diverse Demographic Models.

    PubMed

    Chung, Yujin; Hey, Jody

    2017-06-01

    We present a new Bayesian method for estimating demographic and phylogenetic history using population genomic data. Several key innovations are introduced that allow the study of diverse models within an Isolation-with-Migration framework. The new method implements a 2-step analysis, with an initial Markov chain Monte Carlo (MCMC) phase that samples simple coalescent trees, followed by the calculation of the joint posterior density for the parameters of a demographic model. In step 1, the MCMC sampling phase, the method uses a reduced state space, consisting of coalescent trees without migration paths, and a simple importance sampling distribution without the demography of interest. Once obtained, a single sample of trees can be used in step 2 to calculate the joint posterior density for model parameters under multiple diverse demographic models, without having to repeat MCMC runs. Because migration paths are not included in the state space of the MCMC phase, but rather are handled by analytic integration in step 2 of the analysis, the method is scalable to a large number of loci with excellent MCMC mixing properties. With an implementation of the new method in the computer program MIST, we demonstrate the method's accuracy, scalability, and other advantages using simulated data and DNA sequences of two common chimpanzee subspecies: Pan troglodytes (P. t.) troglodytes and P. t. verus. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.

    PubMed

    Thomson, Rowan M; Kawrakow, Iwan

    2011-08-01

    The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
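
    The validity estimate described above can be reproduced schematically: requiring equal relative uncertainties δ on position (as a fraction of the mean free path λ) and on momentum, the Heisenberg relation Δx·Δp ≥ ħ/2 gives δ = sqrt(ħ/(2λp)). The mean free path values in the sketch below are illustrative assumptions, not the paper's numbers.

        import math

        HBAR = 1.054571817e-34      # J s
        M_E = 9.1093837015e-31      # kg
        EV = 1.602176634e-19        # J

        def relative_uncertainty(kinetic_energy_ev, mean_free_path_m):
            """delta = sqrt(hbar / (2 * lambda * p)), non-relativistic momentum."""
            p = math.sqrt(2.0 * M_E * kinetic_energy_ev * EV)
            return math.sqrt(HBAR / (2.0 * mean_free_path_m * p))

        for energy_ev, mfp_nm in [(1000.0, 1.0), (100.0, 0.5), (20.0, 0.3)]:   # assumed mfp
            delta = relative_uncertainty(energy_ev, mfp_nm * 1e-9)
            print("E = %6.0f eV, assumed mfp = %.1f nm -> delta = %4.1f%%"
                  % (energy_ev, mfp_nm, 100 * delta))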

  18. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  19. Integrated Flight Path Planning System and Flight Control System for Unmanned Helicopters

    PubMed Central

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

    This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes the fuzzy inference systems (FISs) based on the fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM). PMID:22164029

  20. Integrated flight path planning system and flight control system for unmanned helicopters.

    PubMed

    Jan, Shau Shiun; Lin, Yu Hsiang

    2011-01-01

    This paper focuses on the design of an integrated navigation and guidance system for unmanned helicopters. The integrated navigation system comprises two systems: the Flight Path Planning System (FPPS) and the Flight Control System (FCS). The FPPS finds the shortest flight path by the A-Star (A*) algorithm in an adaptive manner for different flight conditions, and the FPPS can add a forbidden zone to stop the unmanned helicopter from crossing over into dangerous areas. In this paper, the FPPS computation time is reduced by the multi-resolution scheme, and the flight path quality is improved by the path smoothing methods. Meanwhile, the FCS includes the fuzzy inference systems (FISs) based on the fuzzy logic. By using expert knowledge and experience to train the FIS, the controller can operate the unmanned helicopter without dynamic models. The integrated system of the FPPS and the FCS is aimed at providing navigation and guidance to the mission destination and it is implemented by coupling the flight simulation software, X-Plane, and the computing software, MATLAB. Simulations are performed and shown in real time three-dimensional animations. Finally, the integrated system is demonstrated to work successfully in controlling the unmanned helicopter to operate in various terrains of a digital elevation model (DEM).
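
    For illustration, a minimal A* search on an occupancy grid with a forbidden zone, the core shortest-path step the FPPS builds on; the grid, unit step costs, and 4-connectivity are assumptions of this sketch, and the paper's multi-resolution scheme and path smoothing are not included.

      import heapq

      def a_star(grid, start, goal):
          """Shortest 4-connected path on an occupancy grid; cells marked 1 are forbidden."""
          rows, cols = len(grid), len(grid[0])
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # admissible Manhattan heuristic
          frontier = [(h(start), 0, start, None)]                   # (f, g, node, parent)
          came_from, best_g = {}, {start: 0}
          while frontier:
              _, g, node, parent = heapq.heappop(frontier)
              if node in came_from:                                 # already expanded optimally
                  continue
              came_from[node] = parent
              if node == goal:                                      # reconstruct the route
                  path = []
                  while node is not None:
                      path.append(node)
                      node = came_from[node]
                  return path[::-1]
              r, c = node
              for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  nr, nc = nxt
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                      ng = g + 1
                      if ng < best_g.get(nxt, float("inf")):
                          best_g[nxt] = ng
                          heapq.heappush(frontier, (ng + h(nxt), ng, nxt, node))
          return None                                               # no feasible route

      # Toy occupancy map: 1s form a forbidden zone the route must fly around.
      grid = [[0, 0, 0, 0, 0, 0, 0],
              [0, 1, 1, 1, 1, 1, 0],
              [0, 0, 0, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 0, 0, 0]]
      print(a_star(grid, (0, 0), (4, 6)))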

  1. Covariant path integrals on hyperbolic surfaces

    NASA Astrophysics Data System (ADS)

    Schaefer, Joe

    1997-11-01

    DeWitt's covariant formulation of path integration [B. De Witt, "Dynamical theory in curved spaces. I. A review of the classical and quantum action principles," Rev. Mod. Phys. 29, 377-397 (1957)] has two practical advantages over the traditional methods of "lattice approximations;" there is no ordering problem, and classical symmetries are manifestly preserved at the quantum level. Applying the spectral theorem for unbounded self-adjoint operators, we provide a rigorous proof of the convergence of certain path integrals on Riemann surfaces of constant curvature -1. The Pauli-DeWitt curvature correction term arises, as in DeWitt's work. Introducing a Fuchsian group Γ of the first kind, and a continuous, bounded, Γ-automorphic potential V, we obtain a Feynman-Kac formula for the automorphic Schrödinger equation on the Riemann surface Γ\H. We analyze the Wick rotation and prove the strong convergence of the so-called Feynman maps [K. D. Elworthy, Path Integration on Manifolds, Mathematical Aspects of Superspace, edited by Seifert, Clarke, and Rosenblum (Reidel, Boston, 1983), pp. 47-90] on a dense set of states. Finally, we give a new proof of some results in C. Grosche and F. Steiner, "The path integral on the Poincare upper half plane and for Liouville quantum mechanics," Phys. Lett. A 123, 319-328 (1987).

  2. Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.

    PubMed

    Tan, Zhenyu; Liu, Wei

    2014-05-01

    The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but it is large compared to the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.

  3. Acceleration of Monte Carlo SPECT simulation using convolution-based forced detection

    NASA Astrophysics Data System (ADS)

    de Jong, H. W. A. M.; Slijpen, E. T. P.; Beekman, F. J.

    2001-02-01

    Monte Carlo (MC) simulation is an established tool to calculate photon transport through tissue in Emission Computed Tomography (ECT). Since the first appearance of MC a large variety of variance reduction techniques (VRT) have been introduced to speed up these notoriously slow simulations. One example of a very effective and established VRT is known as forced detection (FD). In standard FD the path from the photon's scatter position to the camera is chosen stochastically from the appropriate probability density function (PDF), modeling the distance-dependent detector response. In order to speed up MC the authors propose a convolution-based FD (CFD) which involves replacing the sampling of the PDF by a convolution with a kernel which depends on the position of the scatter event. The authors validated CFD for parallel-hole Single Photon Emission Computed Tomography (SPECT) using a digital thorax phantom. Comparison of projections estimated with CFD and standard FD shows that both estimates converge to practically identical projections (maximum bias 0.9% of peak projection value), despite the slightly different photon paths used in CFD and standard FD. Projections generated with CFD converge, however, to a noise-free projection up to one or two orders of magnitude faster, which is extremely useful in many applications such as model-based image reconstruction.
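
    A toy sketch contrasting the two estimators described above: standard forced detection samples one detector bin per scatter event from the distance-dependent response, while the convolution-based variant deposits the whole response for each event. The Gaussian response whose width grows linearly with depth is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      n_bins = 64
      bins = np.arange(n_bins)

      def detector_pdf(lateral_bin, depth):
          """Distance-dependent detector response: a Gaussian centred on the scatter
          position whose width grows with depth (the linear width model is an assumption)."""
          sigma = 1.0 + 0.05 * depth
          p = np.exp(-0.5 * ((bins - lateral_bin) / sigma) ** 2)
          return p / p.sum()

      # Toy scatter events: lateral position, depth towards the camera, statistical weight.
      events = [(rng.uniform(10, 54), rng.uniform(20, 80), 1.0) for _ in range(5000)]

      proj_fd = np.zeros(n_bins)    # standard forced detection: one sampled bin per event
      proj_cfd = np.zeros(n_bins)   # convolution-based FD: the whole response per event
      for x, d, w in events:
          pdf = detector_pdf(x, d)
          proj_fd[rng.choice(n_bins, p=pdf)] += w
          proj_cfd += w * pdf

      # Both estimators share the same expectation; the CFD projection is far smoother.
      print("total counts:", proj_fd.sum(), proj_cfd.sum())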

  4. Inverse Monte Carlo method in a multilayered tissue model for diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Fredriksson, Ingemar; Larsson, Marcus; Strömberg, Tomas

    2012-04-01

    Model based data analysis of diffuse reflectance spectroscopy data enables the estimation of optical and structural tissue parameters. The aim of this study was to present an inverse Monte Carlo method based on spectra from two source-detector distances (0.4 and 1.2 mm), using a multilayered tissue model. The tissue model variables include geometrical properties, light scattering properties, tissue chromophores such as melanin and hemoglobin, oxygen saturation and average vessel diameter. The method utilizes a small set of presimulated Monte Carlo data for combinations of different levels of epidermal thickness and tissue scattering. The path length distributions in the different layers are stored and the effect of the other parameters is added in the post-processing. The accuracy of the method was evaluated using Monte Carlo simulations of tissue-like models containing discrete blood vessels, evaluating blood tissue fraction and oxygenation. It was also compared to a homogeneous model. The multilayer model performed better than the homogeneous model and all tissue parameters significantly improved spectral fitting. Recorded in vivo spectra were fitted well at both distances, which we previously found was not possible with a homogeneous model. No absolute intensity calibration is needed and the algorithm is fast enough for real-time processing.
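
    A minimal sketch of the post-processing idea described above: per-photon path lengths in each layer are stored once, and absorption for any chromophore combination is applied afterwards through Beer-Lambert factors. The gamma-distributed path lengths and the two-layer absorption values are stand-in assumptions, not presimulated Monte Carlo data.

      import numpy as np

      rng = np.random.default_rng(2)

      # Stand-in for the presimulated library: per detected photon, the path length (mm)
      # travelled in each of two layers.  In the real method these come from Monte Carlo
      # runs over a grid of epidermal thickness and scattering levels.
      n_photons = 20_000
      path_lengths = np.column_stack([rng.gamma(2.0, 0.05, n_photons),    # layer 1
                                      rng.gamma(3.0, 0.4, n_photons)])    # layer 2

      def detected_intensity(mu_a):
          """Post-processing: apply Beer-Lambert absorption for the absorption
          coefficients mu_a (per mm, one per layer) to the stored path lengths."""
          return np.mean(np.exp(-path_lengths @ np.asarray(mu_a)))

      # The same stored paths are reused for any chromophore composition, e.g. two
      # hypothetical blood settings that change only the layer-2 absorption.
      print(detected_intensity([0.02, 0.10]))
      print(detected_intensity([0.02, 0.30]))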

  5. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body bases.

    PubMed

    Reboredo, Fernando A; Kim, Jeongnim

    2014-02-21

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  6. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    NASA Astrophysics Data System (ADS)

    Reboredo, Fernando A.; Kim, Jeongnim

    2014-02-01

    A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  7. Modeling a Single SEP Event from Multiple Vantage Points Using the iPATH Model

    NASA Astrophysics Data System (ADS)

    Hu, Junxiang; Li, Gang; Fu, Shuai; Zank, Gary; Ao, Xianzhi

    2018-02-01

    Using the recently extended 2D improved Particle Acceleration and Transport in the Heliosphere (iPATH) model, we model an example gradual solar energetic particle event as observed at multiple locations. Protons and ions that are energized via the diffusive shock acceleration mechanism are followed at a 2D coronal mass ejection-driven shock where the shock geometry varies across the shock front. The subsequent transport of energetic particles, including cross-field diffusion, is modeled by a Monte Carlo code that is based on a stochastic differential equation method. Time intensity profiles and particle spectra at multiple locations and different radial distances, separated in longitudes, are presented. The results shown here are relevant to the upcoming Parker Solar Probe mission.

  8. Radiative transport equation for the Mittag-Leffler path length distribution

    NASA Astrophysics Data System (ADS)

    Liemert, André; Kienle, Alwin

    2017-05-01

    In this paper, we consider the radiative transport equation for infinitely extended scattering media that are characterized by the Mittag-Leffler path length distribution p(ℓ) = -∂_ℓ E_α(-σ_t ℓ^α), which is a generalization of the usually assumed Lambert-Beer law p(ℓ) = σ_t exp(-σ_t ℓ). In this context, we derive the infinite-space Green's function of the underlying fractional transport equation for the spherically symmetric medium as well as for the one-dimensional string. Moreover, simple analytical solutions are presented for the prediction of the radiation field in the single-scattering approximation. The resulting equations are compared with Monte Carlo simulations in the steady-state and time domain showing, within the stochastic nature of the simulations, an excellent agreement.

  9. Energy-optimal path planning by stochastic dynamically orthogonal level-set optimization

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Lermusiaux, Pierre F. J.

    2016-04-01

    A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. Based on partial differential equations, the methodology rigorously leverages the level-set equation that governs time-optimal reachability fronts for a given relative vehicle-speed function. To set up the energy optimization, the relative vehicle-speed and headings are considered to be stochastic and new stochastic Dynamically Orthogonal (DO) level-set equations are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. Numerical schemes to solve the reduced stochastic DO level-set equations are obtained, and accuracy and efficiency considerations are discussed. These reduced equations are first shown to be efficient at solving the governing stochastic level-sets, in part by comparisons with direct Monte Carlo simulations. To validate the methodology and illustrate its accuracy, comparisons with semi-analytical energy-optimal path solutions are then completed. In particular, we consider the energy-optimal crossing of a canonical steady front and set up its semi-analytical solution using an energy-time nested nonlinear double-optimization scheme. We then showcase the inner workings and nuances of the energy-optimal path planning, considering different mission scenarios. Finally, we study and discuss results of energy-optimal missions in a wind-driven barotropic quasi-geostrophic double-gyre ocean circulation.

  10. Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

    2007-01-01

    We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive of using a random sequence is to solve real world problems, it is more desirable if we compare the quality of the sequences based on their performances for these problems in terms of quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator Halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio when the accuracy of the integration is concerned.
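
    A small sketch of the digit-block construction: consecutive blocks of decimal digits of pi are mapped to uniforms and used for a toy Monte Carlo integral, alongside a pseudo-random baseline. The use of mpmath for the digits, the 10-digit block size, and the test integrand are assumptions of this sketch.

      from mpmath import mp, nstr  # mpmath assumed available for arbitrary-precision pi
      import random

      def block_uniforms(digits, block=10):
          """Map consecutive blocks of decimal digits to uniforms in [0, 1)."""
          return [int(digits[i:i + block]) / 10**block
                  for i in range(0, len(digits) - block + 1, block)]

      mp.dps = 20_010                       # working precision in decimal digits
      digits = nstr(+mp.pi, 20_000)[2:]     # digits of pi after the leading "3."

      uniforms = block_uniforms(digits)
      f = lambda x: x * x                   # test integrand; the exact integral on [0, 1] is 1/3
      est_pi = sum(map(f, uniforms)) / len(uniforms)
      est_prng = sum(f(random.random()) for _ in range(len(uniforms))) / len(uniforms)
      print(f"pi-digit source : {est_pi:.5f}")
      print(f"pseudo-random   : {est_prng:.5f}   (exact: {1.0 / 3.0:.5f})")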

  11. Importance of finite-temperature exchange correlation for warm dense matter calculations.

    PubMed

    Karasiev, Valentin V; Calderín, Lázaro; Trickey, S B

    2016-06-01

    The effects of an explicit temperature dependence in the exchange correlation (XC) free-energy functional upon calculated properties of matter in the warm dense regime are investigated. The comparison is between the Karasiev-Sjostrom-Dufty-Trickey (KSDT) finite-temperature local-density approximation (TLDA) XC functional [Karasiev et al., Phys. Rev. Lett. 112, 076403 (2014)PRLTAO0031-900710.1103/PhysRevLett.112.076403] parametrized from restricted path-integral Monte Carlo data on the homogeneous electron gas (HEG) and the conventional Monte Carlo parametrization ground-state LDA XC [Perdew-Zunger (PZ)] functional evaluated with T-dependent densities. Both Kohn-Sham (KS) and orbital-free density-functional theories are used, depending upon computational resource demands. Compared to the PZ functional, the KSDT functional generally lowers the dc electrical conductivity of low-density Al, yielding improved agreement with experiment. The greatest lowering is about 15% for T=15 kK. Correspondingly, the KS band structure of low-density fcc Al from the KSDT functional exhibits a clear increase in interband separation above the Fermi level compared to the PZ bands. In some density-temperature regimes, the deuterium equations of state obtained from the two XC functionals exhibit pressure differences as large as 4% and a 6% range of differences. However, the hydrogen principal Hugoniot is insensitive to the explicit XC T dependence because of cancellation between the energy and pressure-volume work difference terms in the Rankine-Hugoniot equation. Finally, the temperature at which the HEG becomes unstable is T≥7200 K for the T-dependent XC, a result that the ground-state XC underestimates by about 1000 K.

  12. Multispectral scanner system parameter study and analysis software system description, volume 2

    NASA Technical Reports Server (NTRS)

    Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.

    1978-01-01

    The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), the flexibility and versatility of which was superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, data correlation analyzer, scanner IFOV, and random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.

  13. Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield

    NASA Astrophysics Data System (ADS)

    Cramer, S. N.; Roussin, R. W.

    1981-11-01

    A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.

  14. Neural basis of the cognitive map: path integration does not require hippocampus or entorhinal cortex.

    PubMed

    Shrager, Yael; Kirwan, C Brock; Squire, Larry R

    2008-08-19

    The hippocampus and entorhinal cortex have been linked to both memory functions and to spatial cognition, but it has been unclear how these ideas relate to each other. An important part of spatial cognition is the ability to keep track of a reference location using self-motion cues (sometimes referred to as path integration), and it has been suggested that the hippocampus or entorhinal cortex is essential for this ability. Patients with hippocampal lesions or larger lesions that also included entorhinal cortex were led on paths while blindfolded (up to 15 m in length) and were asked to actively maintain the path in mind. Patients pointed to and estimated their distance from the start location as accurately as controls. A rotation condition confirmed that performance was based on self-motion cues. When demands on long-term memory were increased, patients were impaired. Thus, in humans, the hippocampus and entorhinal cortex are not essential for path integration.

  15. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
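
    A minimal sketch of the statistical part of the approach: Monte Carlo samples of program executions feed a Beta-Bernoulli estimate of the probability of reaching a target event. The toy program and the uniform input profile are assumptions, and the informed-sampling pruning and exact partial analysis are not reproduced.

      import random
      from math import sqrt

      def program(x, y):
          """Toy program under analysis: True when the 'assert violation' path is taken."""
          return x > 0.8 and y < 0.3

      # Monte Carlo sampling of program executions with a Beta(1, 1) prior on the
      # probability of reaching the target event (a standard Bayesian estimator).
      hits, trials = 0, 20_000
      for _ in range(trials):
          if program(random.random(), random.random()):
              hits += 1

      alpha, beta = 1 + hits, 1 + trials - hits
      mean = alpha / (alpha + beta)
      var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
      print(f"P(target) ~ {mean:.4f} +/- {sqrt(var):.4f}  (exact 0.2 * 0.3 = 0.06)")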

  16. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  17. Addendum to "Free energies from integral equation theories: enforcing path independence".

    PubMed

    Kast, Stefan M

    2006-01-01

    The variational formalism developed for the analysis of the path dependence of free energies from integral equation theories [S. M. Kast, Phys. Rev. E 67, 041203 (2003)] is extended in order to allow for the three-dimensional treatment of arbitrarily shaped solutes.

  18. A theory for the radiation of magnetohydrodynamic surface waves and body waves into the solar corona

    NASA Technical Reports Server (NTRS)

    Davila, Joseph M.

    1988-01-01

    The Green's function for the slab coronal hole is obtained explicitly. The Fourier integral representation for the radiated field inside and outside the coronal hole waveguide is obtained. The radiated field outside the coronal hole is calculated using the method of steepest descents. It is shown that the radiated field can be written as the sum of two contributions: (1) a contribution from the integral along the steepest descent path and (2) a contribution from all the poles of the integrand between the path of the original integral and the steepest descent path. The free oscillations of the waveguide can be associated with the pole contributions in the steepest descent representation for the Green's function. These pole contributions are essentially generalized surface waves with a maximum amplitude near the interface which separates the plasma inside the coronal hole from the surrounding background corona. The path contribution to the integral is essentially the power radiated in body waves.

  19. Self-organizing path integration using a linked continuous attractor and competitive network: path integration of head direction.

    PubMed

    Stringer, Simon M; Rolls, Edmund T

    2006-12-01

    A key issue is how networks in the brain learn to perform path integration, that is, to update a represented position using a velocity signal. Using head direction cells as an example, we show that a competitive network could self-organize to learn to respond to combinations of head direction and angular head rotation velocity. These combination cells can then be used to drive a continuous attractor network to the next head direction based on the incoming rotation signal. An associative synaptic modification rule with a short term memory trace enables preceding combination cell activity during training to be associated with the next position in the continuous attractor network. The network accounts for the presence of neurons found in the brain that respond to combinations of head direction and angular head rotation velocity. Analogous networks in the hippocampal system could self-organize to perform path integration of place and spatial view representations.
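
    An idealised sketch of the path-integration step: a bump of activity on a ring of head-direction cells is shifted each step by an amount set only by the angular-velocity signal, and the decoded direction is compared with the true heading. The interpolation-based shift stands in for the learned combination-cell weights and trace rule of the paper.

      import numpy as np

      n = 90                                        # head-direction cells on a ring
      angles = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
      activity = np.exp(-0.5 * (np.angle(np.exp(1j * angles)) / 0.3) ** 2)  # bump at 0 rad

      true_heading = 0.0
      d_theta = 2 * np.pi / n
      for step in range(200):
          omega = 0.03                              # angular-velocity signal (rad per step)
          true_heading = (true_heading + omega) % (2 * np.pi)
          # Idealised combination-cell drive: rotate the attractor state by omega using
          # linear interpolation between neighbouring cells (a stand-in for the learned
          # direction-times-velocity weights described in the paper).
          shift = omega / d_theta
          k, frac = int(np.floor(shift)), shift - np.floor(shift)
          activity = (1 - frac) * np.roll(activity, k) + frac * np.roll(activity, k + 1)

      decoded = np.angle(np.sum(activity * np.exp(1j * angles))) % (2 * np.pi)
      print(f"true heading {true_heading:.3f} rad, decoded {decoded:.3f} rad")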

  20. Application of path-integral quantization to indistinguishable particle systems topologically confined by a magnetic field

    NASA Astrophysics Data System (ADS)

    Jacak, Janusz E.

    2018-01-01

    We demonstrate an original development of path-integral quantization in the case of a multiply connected configuration space of indistinguishable charged particles on a 2D manifold and exposed to a strong perpendicular magnetic field. The system turns out to be exceptionally homotopy-rich, and the structure of the homotopy essentially depends on the magnetic field strength, resulting in multiloop trajectories at specific conditions. We have proved, by a generalization of the Bohr-Sommerfeld quantization rule, that the size of a magnetic field flux quantum grows for multiloop orbits like (2k+1)hc/e with the number of loops k. Utilizing this property for electrons on the 2D substrate jellium, we have derived upon the path integration a complete FQHE hierarchy in excellent agreement with experiments. The path integral is then developed into a sum over configurations, displaying various patterns of trajectory homotopies (topological configurations), which, in the nonstationary case of quantum kinetics, reproduces some formerly unclear details of the longitudinal resistivity observed in experiments.

  1. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, Timothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
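
    A one-dimensional sketch of the histogram-versus-KDE comparison for a collision-density tally: each sampled collision scores one bin in the histogram but contributes a kernel to every nearby tally point in the KDE. The fixed Gaussian bandwidth is an assumption; tying the bandwidth to the local mean free path is the refinement the paper develops.

      import numpy as np

      rng = np.random.default_rng(1)
      sigma_t = 1.0                                        # total cross section (1/cm)
      sites = rng.exponential(1.0 / sigma_t, size=5_000)   # first-collision sites in a slab

      edges = np.linspace(0.0, 5.0, 51)
      centers = 0.5 * (edges[:-1] + edges[1:])
      exact = sigma_t * np.exp(-sigma_t * centers)         # analytic first-collision density

      # Histogram tally: each collision scores in exactly one bin.
      hist = np.histogram(sites, bins=edges)[0] / (len(sites) * np.diff(edges))

      # Kernel density tally: each collision contributes to every nearby tally point.
      h = 0.2                                              # fixed Gaussian bandwidth (assumption)
      kde = np.exp(-0.5 * ((centers[:, None] - sites[None, :]) / h) ** 2).sum(axis=1)
      kde /= len(sites) * h * np.sqrt(2.0 * np.pi)

      interior = centers > 0.5                             # avoid the boundary, where the plain KDE is biased
      mae = lambda est: np.mean(np.abs(est - exact)[interior])
      print(f"mean abs error, histogram: {mae(hist):.4f}")
      print(f"mean abs error, KDE      : {mae(kde):.4f}")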

  2. Roton Excitations and the Fluid-Solid Phase Transition in Superfluid 2D Yukawa Bosons

    NASA Astrophysics Data System (ADS)

    Molinelli, S.; Galli, D. E.; Reatto, L.; Motta, M.

    2016-10-01

    We compute several ground-state properties and the dynamical structure factor of a zero-temperature system of Bosons interacting with the 2D screened Coulomb (2D-SC) potential. We resort to the exact shadow path integral ground state (SPIGS) quantum Monte Carlo method to compute the imaginary-time correlation function of the model, and to the genetic algorithm via falsification of theories (GIFT) to retrieve the dynamical structure factor. We provide a detailed comparison of ground-state properties and collective excitations of 2D-SC and ^4He atoms. The roton energy of the 2D-SC system is an increasing function of density, and not a decreasing one as in ^4He. This result is in contrast with the view that the roton is the soft mode of the fluid-solid transition. We uncover a remarkable quasi-universality of backflow and of other properties when expressed in terms of the amount of short-range order as quantified by the height of the first peak of the static structure factor.

  3. First-principles prediction of the softening of the silicon shock Hugoniot curve

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, S. X.; Militzer, B.; Collins, L. A.

    Here, shock compression of silicon (Si) under extremely high pressures (>100 Mbar) was investigated by using two first-principles methods, orbital-free molecular dynamics (OFMD) and path integral Monte Carlo (PIMC). While pressures from the two methods agree very well, PIMC predicts a second compression maximum because of 1s electron ionization that is absent in OFMD calculations since Thomas–Fermi-based theories lack inner shell structure. The Kohn–Sham density functional theory is used to calculate the equation of state (EOS) of warm dense silicon for low-pressure loadings (P < 100 Mbar). Combining these first-principles EOS results, the principal Hugoniot curve of silicon for pressures varying from 0.80 Mbar to above ~10 Gbar was derived. We find that silicon is ~20% or more softer than what was predicted by EOS models based on the chemical picture of matter. Existing experimental data (P ≈ 1–2 Mbar) seem to indicate this softening behavior of Si, which calls for future strong-shock experiments (P > 10 Mbar) to benchmark our results.

  4. Finite-temperature time-dependent variation with multiple Davydov states

    NASA Astrophysics Data System (ADS)

    Wang, Lu; Fujihashi, Yuta; Chen, Lipeng; Zhao, Yang

    2017-03-01

    The Dirac-Frenkel time-dependent variational approach with Davydov Ansätze is a sophisticated, yet efficient technique to obtain an accurate solution to many-body Schrödinger equations for energy and charge transfer dynamics in molecular aggregates and light-harvesting complexes. We extend this variational approach to finite temperature dynamics of the spin-boson model by adopting a Monte Carlo importance sampling method. In order to demonstrate the applicability of this approach, we compare calculated real-time quantum dynamics of the spin-boson model with that from the numerically exact iterative quasiadiabatic propagator path integral (QUAPI) technique. The comparison shows that our variational approach with the single Davydov Ansätze is in excellent agreement with the QUAPI method at high temperatures, while the two differ at low temperatures. Accuracy in dynamics calculations employing a multitude of Davydov trial states is found to improve substantially over the single Davydov Ansatz, especially at low temperatures. At a moderate computational cost, our variational approach with the multiple Davydov Ansatz is shown to provide accurate spin-boson dynamics over a wide range of temperatures and bath spectral densities.

  5. Bond and flux-disorder effects on the superconductor-insulator transition of a honeycomb array of Josephson junctions

    NASA Astrophysics Data System (ADS)

    Granato, Enzo

    2018-05-01

    We study the effects of disorder on the zero-temperature quantum phase transition of a honeycomb array of Josephson junctions in a magnetic field with an average of f0 flux quantum per plaquette. Bond disorder due to spatial variations in the Josephson couplings and magnetic flux disorder due to variations in the plaquette areas are considered. The model can describe the superconductor-insulator transition in ultra-thin films with a triangular pattern of nanoholes. Path integral Monte Carlo simulations of the equivalent (2 + 1)-dimensional classical model are used to study the critical behavior and estimate the universal resistivity at the transition. The results show that bond disorder leads to a rounding of the first-order phase transition for f0 = 1/3 to a continuous transition. For integer f0, the decrease of the critical coupling parameter with flux disorder is significantly different from that of the same model defined on a square lattice. The results are compared with recent experimental observations on nanohole thin films with geometrical disorder and external magnetic field.

  6. Force-field functor theory: classical force-fields which reproduce equilibrium quantum distributions

    PubMed Central

    Babbush, Ryan; Parkhill, John; Aspuru-Guzik, Alán

    2013-01-01

    Feynman and Hibbs were the first to variationally determine an effective potential whose associated classical canonical ensemble approximates the exact quantum partition function. We examine the existence of a map between the local potential and an effective classical potential which matches the exact quantum equilibrium density and partition function. The usefulness of such a mapping rests in its ability to readily improve Born-Oppenheimer potentials for use with classical sampling. We show that such a map is unique and must exist. To explore the feasibility of using this result to improve classical molecular mechanics, we numerically produce a map from a library of randomly generated one-dimensional potential/effective potential pairs then evaluate its performance on independent test problems. We also apply the map to simulate liquid para-hydrogen, finding that the resulting radial pair distribution functions agree well with path integral Monte Carlo simulations. The surprising accessibility and transferability of the technique suggest a quantitative route to adapting Born-Oppenheimer potentials, with a motivation similar in spirit to the powerful ideas and approximations of density functional theory. PMID:24790954

  7. First-principles prediction of the softening of the silicon shock Hugoniot curve

    DOE PAGES

    Hu, S. X.; Militzer, B.; Collins, L. A.; ...

    2016-09-15

    Here, shock compression of silicon (Si) under extremely high pressures (>100 Mbar) was investigated by using two first-principles methods, orbital-free molecular dynamics (OFMD) and path integral Monte Carlo (PIMC). While pressures from the two methods agree very well, PIMC predicts a second compression maximum because of 1s electron ionization that is absent in OFMD calculations since Thomas–Fermi-based theories lack inner shell structure. The Kohn–Sham density functional theory is used to calculate the equation of state (EOS) of warm dense silicon for low-pressure loadings (P < 100 Mbar). Combining these first-principles EOS results, the principal Hugoniot curve of silicon for pressures varying from 0.80 Mbar to above ~10 Gbar was derived. We find that silicon is ~20% or more softer than what was predicted by EOS models based on the chemical picture of matter. Existing experimental data (P ≈ 1–2 Mbar) seem to indicate this softening behavior of Si, which calls for future strong-shock experiments (P > 10 Mbar) to benchmark our results.

  8. Rigorous investigation of the reduced density matrix for the ideal Bose gas in harmonic traps by a loop-gas-like approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beau, Mathieu, E-mail: mbeau@stp.dias.ie; Savoie, Baptiste, E-mail: baptiste.savoie@gmail.com

    2014-05-15

    In this paper, we rigorously investigate the reduced density matrix (RDM) associated to the ideal Bose gas in harmonic traps. We present a method based on a sum-decomposition of the RDM allowing us to treat not only the isotropic trap, but also general anisotropic traps. When focusing on the isotropic trap, the method is analogous to the loop-gas approach developed by Mullin [“The loop-gas approach to Bose-Einstein condensation for trapped particles,” Am. J. Phys. 68(2), 120 (2000)]. Turning to the case of anisotropic traps, we examine the RDM for some anisotropic trap models corresponding to some quasi-1D and quasi-2D regimes. For such models, we bring out an additional contribution in the local density of particles which arises from the mesoscopic loops. The close connection with the occurrence of generalized Bose-Einstein condensation is discussed. Our loop-gas-like approach provides relevant information which can help guide numerical investigations on highly anisotropic systems based on the Path Integral Monte Carlo method.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filinov, A.V.; Golubnychiy, V.O.; Bonitz, M.

    Extending our previous work [A.V. Filinov et al., J. Phys. A 36, 5957 (2003)], we present a detailed discussion of accuracy and practical applications of finite-temperature pseudopotentials for two-component Coulomb systems. Different pseudopotentials are discussed: (i) the diagonal Kelbg potential, (ii) the off-diagonal Kelbg potential, (iii) the improved diagonal Kelbg potential, (iv) an effective potential obtained with the Feynman-Kleinert variational principle, and (v) the 'exact' quantum pair potential derived from the two-particle density matrix. For the improved diagonal Kelbg potential, a simple temperature-dependent fit is derived which accurately reproduces the 'exact' pair potential in the whole temperature range. The derived pseudopotentials are then used in path integral Monte Carlo and molecular-dynamics (MD) simulations to obtain thermodynamical properties of strongly coupled hydrogen. It is demonstrated that classical MD simulations with spin-dependent interaction potentials for the electrons allow for an accurate description of the internal energy of hydrogen in the difficult regime of partial ionization down to temperatures of about 60 000 K. Finally, we point out an interesting relationship between the quantum potentials and the effective potentials used in density-functional theory.

  10. Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions

    DOE PAGES

    Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.

    2018-02-14

    The potential for density functional calculations to predict the properties of matter under extreme conditions depends crucially upon having a non-empirical approximate free energy functional valid over a wide range of state conditions. Unlike the ground-state case, no such free-energy exchange-correlation (XC) functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Application in Kohn-Sham calculations for hot electrons in a static fcc aluminum lattice demonstrates the combined magnitude of thermal and gradient effects handled by this functional. Its accuracy in the increasingly important warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated temperatures and by low density Al calculations over a wide T range.

  11. Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.

    The potential for density functional calculations to predict the properties of matter under extreme conditions depends crucially upon having a non-empirical approximate free energy functional valid over a wide range of state conditions. Unlike the ground-state case, no such free-energy exchange-correlation (XC) functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Application in Kohn-Sham calculations for hot electrons in a static fcc aluminum lattice demonstrates the combined magnitude of thermal and gradient effects handled by this functional. Its accuracy in the increasingly important warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated temperatures and by low density Al calculations over a wide T range.

  12. By-passing the sign-problem in Fermion Path Integral Monte Carlo simulations by use of high-order propagators

    NASA Astrophysics Data System (ADS)

    Chin, Siu A.

    2014-03-01

    The sign-problem in PIMC simulations of non-relativistic fermions increases in severity with the number of fermions and the number of beads (or time-slices) of the simulation. A large number of beads is usually needed, because the conventional primitive propagator is only second-order and the usual thermodynamic energy-estimator converges very slowly from below with the total imaginary time. The Hamiltonian energy-estimator, while more complicated to evaluate, is a variational upper-bound and converges much faster with the total imaginary time, thereby requiring fewer beads. This work shows that when the Hamiltonian estimator is used in conjunction with fourth-order propagators with optimizable parameters, the ground state energies of 2D parabolic quantum-dots with approximately 10 completely polarized electrons can be obtained with only 3-5 beads, before the onset of severe sign problems. This work was made possible by NPRP GRANT #5-674-1-114 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the author.
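
    A grid-based numerical sketch of why bead count matters: for a single particle in a harmonic well (no fermions, so no sign problem), the energy from the primitive second-order propagator is computed for increasing numbers of beads P and compared with the exact result. This only illustrates the slow convergence that motivates fourth-order propagators and the Hamiltonian estimator; it is not the method of the abstract.

      import numpy as np

      # Bead convergence of the primitive (second-order) factorisation for one particle
      # in a 1D harmonic well (hbar = m = omega = 1).
      beta = 5.0
      x = np.linspace(-8.0, 8.0, 601)
      dx = x[1] - x[0]

      def primitive_link(tau):
          """Matrix of <x| exp(-tau V/2) exp(-tau T) exp(-tau V/2) |x'> on the grid."""
          free = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * tau)) / np.sqrt(2.0 * np.pi * tau)
          v_half = np.exp(-0.5 * tau * 0.5 * x ** 2)        # V(x) = x^2 / 2
          return v_half[:, None] * free * v_half[None, :]

      def energy(P, db=1e-3):
          """Thermodynamic estimator E = -d ln Z / d beta via a centred finite difference."""
          def lnZ(b):
              link = primitive_link(b / P) * dx
              return np.log(np.trace(np.linalg.matrix_power(link, P)))
          return -(lnZ(beta + db) - lnZ(beta - db)) / (2.0 * db)

      exact = 0.5 / np.tanh(0.5 * beta)
      for P in (2, 4, 8, 16, 32):
          print(f"P = {P:2d} beads: E = {energy(P):.5f}   (exact {exact:.5f})")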

  13. Asymmetric Top Rotors in Superfluid Para-Hydrogen Nano-Clusters

    NASA Astrophysics Data System (ADS)

    Zeng, Tao; Li, Hui; Roy, Pierre-Nicholas

    2012-06-01

    We present the first simulation study of bosonic clusters doped with an asymmetric top molecule. A variation of the path-integral Monte Carlo method is developed to study a para-water (pH_2O) impurity in para-hydrogen (pH_2) clusters. The growth pattern of the doped clusters is similar in nature to that of the pure clusters. The pH_2O molecule appears to rotate freely in the cluster due to its large rotational constants and the lack of adiabatic following. The presence of pH_2O substantially quenches the superfluid response of pH_2 with respect to the space fixed frame. We also study the behaviour of a sulphur dioxide (^{32}S^{16}O_2) dopant in the pH_2 clusters. For such a heavy rotor, the adiabatic following of the pH_2 molecules is established and the superfluid renormalization of the rotational constants is observed. The rotational structure of the SO_2-p(H_2)_N clusters' ro-vibrational spectra is predicted. The connection between the superfluid response with respect to the external boundary rotation and the dopant rotation is discussed.

  14. Random gauge models of the superconductor-insulator transition in two-dimensional disordered superconductors

    NASA Astrophysics Data System (ADS)

    Granato, Enzo

    2017-11-01

    We study numerically the superconductor-insulator transition in two-dimensional inhomogeneous superconductors with gauge disorder, described by four different quantum rotor models: a gauge glass, a flux glass, a binary phase glass, and a Gaussian phase glass. The first two models describe the combined effect of geometrical disorder in the array of local superconducting islands and a uniform external magnetic field, while the last two describe the effects of random negative Josephson-junction couplings or π junctions. Monte Carlo simulations in the path-integral representation of the models are used to determine the critical exponents and the universal conductivity at the quantum phase transition. The gauge- and flux-glass models display the same critical behavior, within the estimated numerical uncertainties. Similar agreement is found for the binary and Gaussian phase-glass models. Despite the different symmetries and disorder correlations, we find that the universal conductivity of these models is approximately the same. In particular, the ratio of this value to that of the pure model agrees with recent experiments on nanohole thin-film superconductors in a magnetic field, in the large disorder limit.

  15. PathJam: a new service for integrating biological pathway information.

    PubMed

    Glez-Peña, Daniel; Reboiro-Jato, Miguel; Domínguez, Rubén; Gómez-López, Gonzalo; Pisano, David G; Fdez-Riverola, Florentino

    2010-10-28

    Biological pathways are crucial to much of the scientific research today including the study of specific biological processes related with human diseases. PathJam is a new comprehensive and freely accessible web-server application integrating scattered human pathway annotation from several public sources. The tool has been designed for both (i) being intuitive for wet-lab users providing statistical enrichment analysis of pathway annotations and (ii) giving support to the development of new integrative pathway applications. PathJam’s unique features and advantages include interactive graphs linking pathways and genes of interest, downloadable results in fully compatible formats, GSEA compatible output files and a standardized RESTful API.

  16. Integration of Monte-Carlo ray tracing with a stochastic optimisation method: application to the design of solar receiver geometry.

    PubMed

    Asselineau, Charles-Alexis; Zapata, Jose; Pye, John

    2015-06-01

    A stochastic optimisation method adapted to illumination and radiative heat transfer problems involving Monte-Carlo ray-tracing is presented. A solar receiver shape optimisation case study illustrates the advantages of the method and its potential: efficient receivers are identified using a moderate computational cost.
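
    A toy sketch of coupling a Monte Carlo ray-trace objective to a stochastic optimiser: a noisy interception fraction is re-estimated at each step and a simple (1+1) evolution strategy updates the receiver parameters. The slot geometry, pointing-error model, size penalty, and optimiser are all illustrative assumptions, not the method of the paper.

      import numpy as np

      rng = np.random.default_rng(3)

      def mc_ray_trace(width, depth, n_rays=4000):
          """Toy Monte Carlo ray trace: fraction of rays from a 2-unit aperture, with a
          Gaussian pointing error, that enter a receiver slot of the given width placed
          at the given depth.  Geometry and error model are illustrative assumptions."""
          x0 = rng.uniform(-1.0, 1.0, n_rays)
          theta = rng.normal(0.0, 0.05, n_rays)
          return np.mean(np.abs(x0 + depth * np.tan(theta)) < 0.5 * width)

      def objective(params):
          width, depth = params
          # Reward intercepted rays, penalise receiver size (a stand-in for thermal losses).
          return mc_ray_trace(width, depth) - 0.2 * width

      # Simple (1+1) evolution strategy: the noisy objective is re-estimated by a fresh
      # Monte Carlo run at every step.
      best = np.array([0.5, 1.0])                  # initial (width, depth)
      best_val = objective(best)
      for _ in range(60):
          cand = np.clip(best + rng.normal(0.0, 0.05, 2), [0.05, 0.1], [1.5, 3.0])
          val = objective(cand)
          if val > best_val:
              best, best_val = cand, val
      print("best (width, depth):", np.round(best, 3), " score:", round(best_val, 3))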

  17. Integrative Families and Systems Treatment: A Middle Path toward Integrating Common and Specific Factors in Evidence-Based Family Therapy

    ERIC Educational Resources Information Center

    Fraser, J. Scott; Solovey, Andrew D.; Grove, David; Lee, Mo Yee; Greene, Gilbert J.

    2012-01-01

    A moderate common factors approach is proposed as a synthesis or middle path to integrate common and specific factors in evidence-based approaches to high-risk youth and families. The debate in family therapy between common and specific factors camps is reviewed and followed by suggestions from the literature for synthesis and creative flexibility…

  18. Rotational excitations of N2O in small helium clusters and the role of Bose permutation symmetry

    NASA Astrophysics Data System (ADS)

    Paesani, F.; Whaley, K. B.

    2004-09-01

    We present a detailed study of the energetics, structures, and Bose properties of small clusters of 4He containing a single nitrous oxide (N2O) molecule, from N=1 4He up to sizes corresponding to completion of the first solvation shell around N2O (N=16 4He). Ground state properties are calculated using the importance-sampled rigid-body diffusion Monte Carlo method, rotational excited state calculations are made with the projection operator imaginary time spectral evolution method, and Bose permutation exchange and associated superfluid properties are calculated with the finite temperature path integral method. For N⩽5 the helium atoms are seen to form an equatorial ring around the molecular axis, at N=6 helium density starts to occupy the second (local) minimum of the N2O-He interaction at the oxygen side of the molecule, and N=9 is the critical size at which there is onset of helium solvation all along the molecular axis. For N⩾8 six 4He atoms are distributed in a symmetric, quasirigid ring around N2O. Path integral calculations show essentially complete superfluid response to rotation about the molecular axis for N⩾5, and a rise of the perpendicular superfluid response from zero to appreciable values for N⩾8. Rotational excited states are computed for three values of the total angular momentum, J=1-3, and the energy levels fitted to obtain effective spectroscopic constants that show excellent agreement with the experimentally observed N dependence of the effective rotational constant Beff. The non-monotonic behavior of the rotational constant is seen to be due to the onset of long 4He permutation exchanges and associated perpendicular superfluid response of the clusters for N⩾8. We provide a detailed analysis of the role of the helium solvation structure and superfluid properties in determining the effective rotational constants.

  19. Combining Offline and Online Computation for Solving Partially Observable Markov Decision Process

    DTIC Science & Technology

    2015-03-06

    David Hsu and Wee Sun Lee, Monte Carlo Bayesian Reinforcement Learning, International Conference on Machine Learning (ICML), 2012. • Haoyu Bai, David...and Automation (ICRA), 2015. • Zhan Wei Lim, David Hsu, and Wee Sun Lee, Adaptive Informative Path Planning in Metric Spaces. Submitted to Int. J... Automation (ICRA), 2015. 2. Bai, H., Hsu, D., Kochenderfer, M. J., and Lee, W. S., Unmanned aircraft collision avoidance using continuous state POMDPs

  20. Iterative blip-summed path integral for quantum dynamics in strongly dissipative environments

    NASA Astrophysics Data System (ADS)

    Makri, Nancy

    2017-04-01

    The iterative decomposition of the blip-summed path integral [N. Makri, J. Chem. Phys. 141, 134117 (2014)] is described. The starting point is the expression of the reduced density matrix for a quantum system interacting with a harmonic dissipative bath in the form of a forward-backward path sum, where the effects of the bath enter through the Feynman-Vernon influence functional. The path sum is evaluated iteratively in time by propagating an array that stores blip configurations within the memory interval. Convergence with respect to the number of blips and the memory length yields numerically exact results which are free of statistical error. In situations of strongly dissipative, sluggish baths, the algorithm leads to a dramatic reduction of computational effort in comparison with iterative path integral methods that do not implement the blip decomposition. This gain in efficiency arises from (i) the rapid convergence of the blip series and (ii) circumventing the explicit enumeration of between-blip path segments, whose number grows exponentially with the memory length. Application to an asymmetric dissipative two-level system illustrates the rapid convergence of the algorithm even when the bath memory is extremely long.

  1. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.
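
    To make the quadrature step concrete, the short sketch below apportions the increment of a hypothetical two-source concentration response along a proportional emission path with a three-point Gauss-Legendre rule. It is only an illustration of the integral, not the CAMx workflow: numpy is assumed, the response function is invented, and finite differences stand in for the decoupled direct method.

        # Sketch of path-integral source apportionment along a proportional
        # emission path, evaluated with Gauss-Legendre quadrature (numpy assumed).
        # The response function and its sensitivities are toys; in the paper the
        # sensitivities come from the decoupled direct method in CAMx.
        import numpy as np

        def concentration(e):
            # hypothetical nonlinear response to two anthropogenic sources
            return 40.0 + 8.0 * e[0] + 5.0 * e[1] - 1.5 * e[0] * e[1]

        def sensitivities(e, h=1e-6):
            # first-order sensitivities dC/dE_j by finite differences (toy stand-in)
            base = concentration(e)
            return np.array([(concentration(e + h * np.eye(2)[j]) - base) / h
                             for j in range(2)])

        e_bg, e_full = np.array([0.0, 0.0]), np.array([1.0, 1.0])
        delta_e = e_full - e_bg

        # three Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
        x, w = np.polynomial.legendre.leggauss(3)
        lam, w = 0.5 * (x + 1.0), 0.5 * w

        contrib = np.zeros(2)
        for lk, wk in zip(lam, w):
            contrib += wk * sensitivities(e_bg + lk * delta_e) * delta_e

        print("per-source contributions:", contrib)
        print("sum of contributions    :", contrib.sum())
        print("anthropogenic increment :", concentration(e_full) - concentration(e_bg))

    Because the toy response is quadratic in the emissions, the three-point rule integrates the sensitivities exactly and the two source contributions sum to the full increment.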

  2. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2014-12-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOC's) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using 3 or 4 points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.

  3. SIMULATION STUDY FOR GASEOUS FLUXES FROM AN AREA SOURCE USING COMPUTED TOMOGRAPHY AND OPTICAL REMOTE SENSING

    EPA Science Inventory

    The paper presents a new approach to quantifying emissions from fugitive gaseous air pollution sources. Computed tomography (CT) and path-integrated optical remote sensing (PI-ORS) concentration data are combined in a new field beam geometry. Path-integrated concentrations are ...

  4. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  5. Piloting Systems Reset Path Integration Systems during Position Estimation

    ERIC Educational Resources Information Center

    Zhang, Lei; Mou, Weimin

    2017-01-01

    During locomotion, individuals can determine their positions with either idiothetic cues from movement (path integration systems) or visual landmarks (piloting systems). This project investigated how these 2 systems interact in determining humans' positions. In 2 experiments, participants studied the locations of 5 target objects and 1 single…

  6. Path-integral invariants in abelian Chern-Simons theory

    NASA Astrophysics Data System (ADS)

    Guadagnini, E.; Thuillier, F.

    2014-05-01

    We consider the U(1) Chern-Simons gauge theory defined in a general closed oriented 3-manifold M; the functional integration is used to compute the normalized partition function and the expectation values of the link holonomies. The non-perturbative path-integral is defined in the space of the gauge orbits of the connections which belong to the various inequivalent U(1) principal bundles over M; the different sectors of configuration space are labelled by the elements of the first homology group of M and are characterized by appropriate background connections. The gauge orbits of flat connections, whose classification is also based on the homology group, control the non-perturbative contributions to the mean values. The functional integration is carried out in any 3-manifold M, and the corresponding path-integral invariants turn out to be strictly related with the abelian Reshetikhin-Turaev surgery invariants.

  7. Hybrid transport and diffusion modeling using electron thermal transport Monte Carlo SNB in DRACO

    NASA Astrophysics Data System (ADS)

    Chenhall, Jeffrey; Moses, Gregory

    2017-10-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup diffusion electron thermal transport method is adapted into an Electron Thermal Transport Monte Carlo (ETTMC) transport method to better model angular and long mean free path non-local effects. Previously, the ETTMC model had been implemented in the 2D DRACO multiphysics code and found to produce consistent results with the iSNB method. Current work is focused on a hybridization of the computationally slower but higher fidelity ETTMC transport method with the computationally faster iSNB diffusion method in order to maximize computational efficiency. Furthermore, effects on the energy distribution of the heat flux divergence are studied. Work to date on the hybrid method will be presented. This work was supported by Sandia National Laboratories and the Univ. of Rochester Laboratory for Laser Energetics.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Liborio I., E-mail: liborio78@gmail.com

A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  9. How important is exact knowledge of preferential flowpath locations and orientations for understanding spatiotemporally integrated spring hydrologic and transport response?

    NASA Astrophysics Data System (ADS)

    Henson, W.; De Rooij, R.; Graham, W. D.

    2016-12-01

    The Upper Floridian Aquifer is hydrogeologically complex; limestone dissolution has led to vertical and horizontal preferential flow paths. Locations of karst conduits are unknown and conduit properties are poorly constrained. Uncertainty in effects of conduit location, size, and density, network geometry and connectivity on hydrologic and transport responses is not well quantified, leading to limited use of discrete-continuum models that incorporate conduit networks for regional-scale hydrologic regulatory models. However, conduit networks typically dominate flow and contaminant transport in karst aquifers. We evaluated sensitivity of simulated water and nitrate fluxes and flow paths to karst conduit geometry in a springshed representative of Silver Springs, Florida, using a novel calcite dissolution conduit-generation algorithm coupled with a discrete-continuum flow and transport model (DisCo). Monte Carlo simulations of conduit generation, groundwater flow, and conservative solute transport indicate that, if a first magnitude spring system conduit network developed (i.e., spring flow >2.8 m3/s), the uncertainty in hydraulic and solute pulse response metrics at the spring vent was minimally related to locational uncertainty of network elements. Across the ensemble of realizations for various distributions of conduits, first magnitude spring hydraulic pulse metrics (e.g., steady-flow, peak flow, and recession coefficients) had < 0.01 coefficient of variation (CV). Similarly, spring solute breakthrough curve moments had low CV (<0.08); peak arrival had CV=0.06, mean travel time had CV=0.05, and travel time standard deviation had CV=0.08. Nevertheless, hydraulic and solute pulse response metrics were significantly different than those predicted by an equivalent porous-media model. These findings indicate that regional-scale decision models that incorporate karst preferential flow paths within an uncertainty framework can be used to better constrain aquifer-vulnerability estimates, despite lacking information about actual conduit locations.

  10. A new navigational mechanism mediated by ant ocelli.

    PubMed

    Schwarz, Sebastian; Wystrach, Antoine; Cheng, Ken

    2011-12-23

    Many animals rely on path integration for navigation and desert ants are the champions. On leaving the nest, ants continuously integrate their distance and direction of travel so that they always know their current distance and direction from the nest and can take a direct path to home. Distance information originates from a step-counter and directional information is based on a celestial compass. So far, it has been assumed that the directional information obtained from ocelli contribute to a single global path integrator, together with directional information from the dorsal rim area (DRA) of the compound eyes and distance information from the step-counter. Here, we show that ocelli mediate a distinct compass from that mediated by the compound eyes. After travelling a two-leg outbound route, untreated foragers headed towards the nest direction, showing that both legs of the route had been integrated. In contrast, foragers with covered compound eyes but uncovered ocelli steered in the direction opposite to the last leg of the outbound route. Our findings suggest that, unlike the DRA, ocelli cannot by themselves mediate path integration. Instead, ocelli mediate a distinct directional system, which buffers the most recent leg of a journey.
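
    For orientation, the vector-summation arithmetic behind path integration can be sketched in a few lines (numpy assumed; the distances and headings are hypothetical stand-ins for the ant's step-counter and celestial-compass inputs).

        # Minimal vector-summation path integration (numpy assumed): each leg adds
        # distance times a unit vector along the heading, and the homing vector is
        # the reverse of the accumulated sum.
        import numpy as np

        def integrate_path(legs):
            """legs: iterable of (distance, heading in degrees)."""
            v = np.zeros(2)
            for dist, heading in legs:
                rad = np.radians(heading)
                v += dist * np.array([np.cos(rad), np.sin(rad)])
            return v

        outbound = [(10.0, 0.0), (6.0, 90.0)]          # a two-leg outbound route
        home_vector = -integrate_path(outbound)
        heading_home = np.degrees(np.arctan2(home_vector[1], home_vector[0])) % 360
        print("homing heading (deg):", round(heading_home, 1))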

  11. A Hierarchical Approach to Fracture Mechanics

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Taasan, Shlomo

    2004-01-01

    Recent research conducted under NASA LaRC's Creativity and Innovation Program has led to the development of an initial approach for a hierarchical fracture mechanics. This methodology unites failure mechanisms occurring at different length scales and provides a framework for a physics-based theory of fracture. At the nanoscale, parametric molecular dynamic simulations are used to compute the energy associated with atomic level failure mechanisms. This information is used in a mesoscale percolation model of defect coalescence to obtain statistics of fracture paths and energies through Monte Carlo simulations. The mathematical structure of predicted crack paths is described using concepts of fractal geometry. The non-integer fractal dimension relates geometric and energy measures between meso- and macroscales. For illustration, a fractal-based continuum strain energy release rate is derived for inter- and transgranular fracture in polycrystalline metals.

  12. A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments

    Treesearch

    S. Healey; P. Patterson; S. Urbanski

    2014-01-01

    Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...

  13. Bayesian estimation of realized stochastic volatility model by Hybrid Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2014-03-01

    The hybrid Monte Carlo algorithm (HMCA) is applied for Bayesian parameter estimation of the realized stochastic volatility (RSV) model. Using the 2nd order minimum norm integrator (2MNI) for the molecular dynamics (MD) simulation in the HMCA, we find that the 2MNI is more efficient than the conventional leapfrog integrator. We also find that the autocorrelation time of the volatility variables sampled by the HMCA is very short. Thus it is concluded that the HMCA with the 2MNI is an efficient algorithm for parameter estimations of the RSV model.
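
    A generic sketch of a hybrid (Hamiltonian) Monte Carlo update with the conventional leapfrog integrator on a toy Gaussian target is given below (numpy assumed); the paper's 2MNI integrator and the RSV likelihood are not reproduced here.

        # Generic hybrid / Hamiltonian Monte Carlo step with a leapfrog integrator
        # on a toy standard-normal target (numpy assumed).
        import numpy as np

        rng = np.random.default_rng(0)

        def U(q):      return 0.5 * np.dot(q, q)        # negative log target
        def grad_U(q): return q

        def hmc_step(q, eps=0.1, n_steps=20):
            p = rng.standard_normal(q.shape)
            q_new, p_new = q.copy(), p.copy()
            p_new -= 0.5 * eps * grad_U(q_new)           # half kick
            for _ in range(n_steps - 1):
                q_new += eps * p_new                     # drift
                p_new -= eps * grad_U(q_new)             # kick
            q_new += eps * p_new
            p_new -= 0.5 * eps * grad_U(q_new)           # final half kick
            dH = U(q_new) + 0.5 * p_new @ p_new - U(q) - 0.5 * p @ p
            return (q_new, 1) if rng.random() < np.exp(-dH) else (q, 0)

        q, accepted = np.zeros(5), 0
        for _ in range(2000):
            q, acc = hmc_step(q)
            accepted += acc
        print("acceptance rate:", accepted / 2000)

    Replacing the leapfrog block with a different reversible, volume-preserving integrator, as the paper does with the 2MNI scheme, leaves the Metropolis accept/reject structure unchanged.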

  14. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934

  15. On the Path Integral in Non-Commutative (nc) Qft

    NASA Astrophysics Data System (ADS)

    Dehne, Christoph

    2008-09-01

    As is generally known, different quantization schemes applied to field theory on NC spacetime lead to Feynman rules with different physical properties, if time does not commute with space. In particular, the Feynman rules that are derived from the path integral corresponding to the T*-product (the so-called naïve Feynman rules) violate the causal time ordering property. Within the Hamiltonian approach to quantum field theory, we show that we can (formally) modify the time ordering encoded in the above path integral. The resulting Feynman rules are identical to those obtained in the canonical approach via the Gell-Mann-Low formula (with T-ordering). They preserve thus unitarity and causal time ordering.

  16. Tunable quantum interference in a 3D integrated circuit.

    PubMed

    Chaboyer, Zachary; Meany, Thomas; Helt, L G; Withford, Michael J; Steel, M J

    2015-04-27

    Integrated photonics promises solutions to questions of stability, complexity, and size in quantum optics. Advances in tunable and non-planar integrated platforms, such as laser-inscribed photonics, continue to bring the realisation of quantum advantages in computation and metrology ever closer, perhaps most easily seen in multi-path interferometry. Here we demonstrate control of two-photon interference in a chip-scale 3D multi-path interferometer, showing a reduced periodicity and enhanced visibility compared to single photon measurements. Observed non-classical visibilities are widely tunable, and explained well by theoretical predictions based on classical measurements. With these predictions we extract Fisher information approaching a theoretical maximum. Our results open a path to quantum enhanced phase measurements.

  17. Path integral measure, constraints and ghosts for massive gravitons with a cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metaxas, Dimitrios

    2009-12-15

For massive gravity in a de Sitter background one encounters problems of stability when the curvature is larger than the graviton mass. I analyze this situation from the path integral point of view and show that it is related to the conformal factor problem of Euclidean quantum (massless) gravity. When a constraint for massive gravity is incorporated and the proper treatment of the path integral measure is taken into account one finds that, for particular choices of the DeWitt metric on the space of metrics (in fact, the same choices as in the massless case), one obtains the opposite bound on the graviton mass.

  18. Note: A portable Raman analyzer for microfluidic chips based on a dichroic beam splitter for integration of imaging and signal collection light paths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Yijia; Xu, Shuping; Xu, Weiqing, E-mail: xuwq@jlu.edu.cn

    An integrated and portable Raman analyzer featuring an inverted probe fixed on a motor-driving adjustable optical module was designed for the combination of a microfluidic system. It possesses a micro-imaging function. The inverted configuration is advantageous to locate and focus microfluidic channels. Different from commercial micro-imaging Raman spectrometers using manual switchable light path, this analyzer adopts a dichroic beam splitter for both imaging and signal collection light paths, which avoids movable parts and improves the integration and stability of optics. Combined with surface-enhanced Raman scattering technique, this portable Raman micro-analyzer is promising as a powerful tool for microfluidic analytics.

  19. Blip decomposition of the path integral: exponential acceleration of real-time calculations on quantum dissipative systems.

    PubMed

    Makri, Nancy

    2014-10-07

    The real-time path integral representation of the reduced density matrix for a discrete system in contact with a dissipative medium is rewritten in terms of the number of blips, i.e., elementary time intervals over which the forward and backward paths are not identical. For a given set of blips, it is shown that the path sum with respect to the coordinates of all remaining time points is isomorphic to that for the wavefunction of a system subject to an external driving term and thus can be summed by an inexpensive iterative procedure. This exact decomposition reduces the number of terms by a factor that increases exponentially with propagation time. Further, under conditions (moderately high temperature and/or dissipation strength) that lead primarily to incoherent dynamics, the "fully incoherent limit" zero-blip term of the series provides a reasonable approximation to the dynamics, and the blip series converges rapidly to the exact result. Retention of only the blips required for satisfactory convergence leads to speedup of full-memory path integral calculations by many orders of magnitude.

  20. A Note on the Stochastic Nature of Feynman Quantum Paths

    NASA Astrophysics Data System (ADS)

    Botelho, Luiz C. L.

    2016-11-01

    We propose a Fresnel stochastic white noise framework to analyze the stochastic nature of the Feynman paths entering on the Feynman Path Integral expression for the Feynman Propagator of a particle quantum mechanically moving under a time-independent potential.

  1. Bayes factors for the linear ballistic accumulator model of decision-making.

    PubMed

    Evans, Nathan J; Brown, Scott D

    2018-04-01

    Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
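
    The brute-force Monte Carlo integration behind such a Bayes factor can be sketched as follows (numpy assumed; a Gaussian-mean likelihood stands in for the LBA, and no GPU acceleration is attempted): the marginal likelihood of each model is the prior-weighted average of the data likelihood.

        # Bayes factor by brute-force Monte Carlo integration (numpy assumed):
        # each marginal likelihood is the average of the data likelihood over
        # draws from that model's prior. A toy Gaussian-mean model replaces the LBA.
        import numpy as np

        rng = np.random.default_rng(1)
        data = rng.normal(0.4, 1.0, size=50)            # simulated observations

        def log_marginal(prior_draws):
            mus = prior_draws
            ll = (-0.5 * ((data[None, :] - mus[:, None]) ** 2).sum(axis=1)
                  - 0.5 * data.size * np.log(2.0 * np.pi))
            m = ll.max()                                # log-sum-exp for stability
            return m + np.log(np.mean(np.exp(ll - m)))

        n = 50_000
        logZ1 = log_marginal(rng.normal(0.0, 1.0, n))   # M1: mu ~ N(0, 1)
        logZ0 = log_marginal(np.zeros(n))               # M0: mu fixed at 0
        print("log Bayes factor (M1 over M0):", logZ1 - logZ0)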

  2. PLANE-INTEGRATED OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROMETRY METHODOLOGY FOR ANAEROBIC SWINE LAGOON EMISSION MEASUREMENTS

    EPA Science Inventory

    Emissions of ammonia and methane from an anaerobic lagoon at a swine animal feeding operation were evaluated five times over a period of two years. The plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) methodology was used to transect the plume at ...

  3. Integration of Technology into the Classroom: Case Studies.

    ERIC Educational Resources Information Center

    Johnson, D. LaMont, Ed.; Maddux, Cleborne D., Ed.; Liu, Leping, Ed.

    This book contains the following case studies on the integration of technology in education: (1) "First Steps toward a Statistically Generated Information Technology Integration Model" (D. LaMont Johnson and Leping Liu); (2) "Case Studies: Are We Rejecting Rigor or Rediscovering Richness?" (Cleborne D. Maddux); (3)…

  4. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: A path for the optimization of low-energy many-body bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reboredo, Fernando A.; Kim, Jeongnim

A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  5. A study of electron transfer using a three-level system coupled to an ohmic bath

    NASA Technical Reports Server (NTRS)

    Takasu, Masako; Chandler, David

    1993-01-01

    Electron transfer is studied using a multi-level system coupled to a bosonic bath. Two body correlation functions are obtained using both exact enumeration of spin paths and Monte Carlo simulation. It was found that the phase boundary for the coherent-incoherent transition lies at a smaller friction in the asymmetric two-level model than in the symmetric two-level model. A similar coherent-incoherent transition is observed for three-level system.

  6. The Mathematics of Mixing Things Up

    NASA Astrophysics Data System (ADS)

    Diaconis, Persi

    2011-08-01

    How long should a Markov chain Monte Carlo algorithm be run? Using examples from statistical physics (Ehrenfest urn, Ising model, hard discs) as well as card shuffling, this tutorial paper gives an overview of a body of mathematical results that can give useful answers to practitioners (viz: seven shuffles suffice for practical purposes). It points to new techniques (path coupling, geometric inequalities, and Harris recurrence). The discovery of phase transitions in mixing times (the cutoff phenomenon) is emphasized.

  7. Low Variance Couplings for Stochastic Models of Intracellular Processes with Time-Dependent Rate Functions.

    PubMed

    Anderson, David F; Yuan, Chaojie

    2018-04-18

    A number of coupling strategies are presented for stochastically modeled biochemical processes with time-dependent parameters. In particular, the stacked coupling is introduced and is shown via a number of examples to provide an exceptionally low variance between the generated paths. This coupling will be useful in the numerical computation of parametric sensitivities and the fast estimation of expectations via multilevel Monte Carlo methods. We provide the requisite estimators in both cases.
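
    As a rough illustration of why coupled paths help, the sketch below couples two tau-leaping trajectories of a birth-death process with a time-dependent birth rate through shared Poisson counts and compares the variance of their difference with that of independent paths (numpy assumed; this is a generic split coupling, not the stacked coupling introduced in the paper).

        # Split coupling of two tau-leaping paths of a birth-death process with a
        # time-dependent birth rate (numpy assumed). Shared Poisson counts make the
        # nominal and perturbed paths move together, so the difference used for
        # sensitivities or multilevel Monte Carlo has far lower variance.
        import numpy as np

        rng = np.random.default_rng(2)
        T, dt, d = 10.0, 0.05, 0.5

        def birth(t, theta):
            return theta * (1.0 + 0.5 * np.sin(t))      # time-dependent birth rate

        def path_difference(theta, dtheta, couple):
            x1 = x2 = 0.0
            for k in range(int(T / dt)):
                t = k * dt
                for a1, a2, sign in ((birth(t, theta), birth(t, theta + dtheta), +1),
                                     (d * x1, d * x2, -1)):
                    if couple:
                        shared = rng.poisson(min(a1, a2) * dt)
                        n1 = shared + rng.poisson(max(a1 - a2, 0.0) * dt)
                        n2 = shared + rng.poisson(max(a2 - a1, 0.0) * dt)
                    else:
                        n1, n2 = rng.poisson(a1 * dt), rng.poisson(a2 * dt)
                    x1, x2 = max(x1 + sign * n1, 0.0), max(x2 + sign * n2, 0.0)
            return x2 - x1

        coupled     = [path_difference(10.0, 0.5, True)  for _ in range(500)]
        independent = [path_difference(10.0, 0.5, False) for _ in range(500)]
        print("variance of difference, coupled    :", np.var(coupled))
        print("variance of difference, independent:", np.var(independent))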

  8. Tackling higher derivative ghosts with the Euclidean path integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in themore » most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.« less

  9. Metal-Insulator Transition in Nanoparticle Solids: Insights from Kinetic Monte Carlo Simulations

    DOE PAGES

    Qu, Luman; Vörös, Márton; Zimanyi, Gergely T.

    2017-08-01

Progress has been rapid in increasing the efficiency of energy conversion in nanoparticles. However, extraction of the photo-generated charge carriers remains challenging. Encouragingly, the charge mobility has been improved recently by driving nanoparticle (NP) films across the metal-insulator transition (MIT). To simulate MIT in NP films, we developed a hierarchical Kinetic Monte Carlo transport model. Electrons transfer between neighboring NPs via activated hopping when the NP energies differ by more than an overlap energy, but transfer by a non-activated quantum delocalization, if the NP energies are closer than the overlap energy. As the overlap energy increases, emerging percolating clusters support a metallic transport across the entire film. We simulated the evolution of the temperature-dependent electron mobility. We analyzed our data in terms of two candidate models of the MIT: (a) as a Quantum Critical Transition, signaled by an effective gap going to zero; and (b) as a Quantum Percolation Transition, where a sample-spanning metallic percolation path is formed as the fraction of the hopping bonds in the transport paths is going to zero. We found that the Quantum Percolation Transition theory provides a better description of the MIT. We also observed an anomalously low gap region next to the MIT. We discuss the relevance of our results in the light of recent experimental measurements.
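
    A toy version of the percolation analysis is sketched below (numpy assumed; random site energies on a small square lattice stand in for the NP film): bonds whose energy difference falls below the overlap energy are treated as metallic, and we test whether they form a sample-spanning cluster.

        # Toy percolation picture of the metal-insulator transition (numpy assumed):
        # random nanoparticle energies on a square lattice; a bond is "metallic" when
        # the energy difference of its two sites is below the overlap energy.
        import numpy as np
        from collections import deque

        rng = np.random.default_rng(3)
        L = 40
        energies = rng.normal(0.0, 1.0, size=(L, L))    # NP energy disorder (toy units)

        def spans(overlap):
            """BFS over sites linked by metallic bonds; True if a cluster
            connects the left edge to the right edge of the sample."""
            seen = np.zeros((L, L), bool)
            for start in range(L):
                if seen[start, 0]:
                    continue
                queue = deque([(start, 0)])
                seen[start, 0] = True
                while queue:
                    i, j = queue.popleft()
                    if j == L - 1:
                        return True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < L and 0 <= nj < L and not seen[ni, nj]
                                and abs(energies[i, j] - energies[ni, nj]) < overlap):
                            seen[ni, nj] = True
                            queue.append((ni, nj))
            return False

        for overlap in (0.2, 0.6, 1.0, 1.6, 2.4):
            dv = np.abs(np.diff(energies, axis=0)) < overlap
            dh = np.abs(np.diff(energies, axis=1)) < overlap
            frac = (dv.sum() + dh.sum()) / (dv.size + dh.size)
            print(f"overlap={overlap:.1f}  metallic-bond fraction={frac:.2f}  "
                  f"sample-spanning metallic cluster: {spans(overlap)}")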

  10. Variance-reduced simulation of lattice discrete-time Markov chains with applications in reaction networks

    NASA Astrophysics Data System (ADS)

    Maginnis, P. A.; West, M.; Dullerud, G. E.

    2016-10-01

We propose an algorithm to accelerate Monte Carlo simulation for a broad class of stochastic processes. Specifically, the class of countable-state, discrete-time Markov chains driven by additive Poisson noise, or lattice discrete-time Markov chains. In particular, this class includes simulation of reaction networks via the tau-leaping algorithm. To produce the speedup, we simulate pairs of fair-draw trajectories that are negatively correlated. Thus, when averaged, these paths produce an unbiased Monte Carlo estimator that has reduced variance and, therefore, reduced error. Numerical results for three example systems included in this work demonstrate two to four orders of magnitude reduction of mean-square error. The numerical examples were chosen to illustrate different application areas and levels of system complexity. The areas are: gene expression (affine state-dependent rates), aerosol particle coagulation with emission and human immunodeficiency virus infection (both with nonlinear state-dependent rates). Our algorithm views the system dynamics as a "black box", i.e., we only require control of pseudorandom number generator inputs. As a result, typical codes can be retrofitted with our algorithm using only minor changes. We prove several analytical results. Among these, we characterize the relationship of covariances between paths in the general nonlinear state-dependent intensity rates case, and we prove variance reduction of mean estimators in the special case of affine intensity rates.
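
    The idea of negatively correlated trajectory pairs can be illustrated with antithetic uniforms fed through the Poisson inverse CDF inside a tau-leaping step (numpy and scipy assumed; this simple antithetic pairing is not the specific coupling analyzed in the paper).

        # Antithetic tau-leaping pairs for a birth-death (production/degradation)
        # process (numpy and scipy assumed): one path uses uniform u, its partner
        # uses 1 - u through the Poisson inverse CDF, so the two paths are
        # negatively correlated while each remains a fair draw.
        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(4)
        T, dt, k_prod, k_deg = 5.0, 0.05, 20.0, 1.0

        def antithetic_pair():
            x1 = x2 = 0.0
            for _ in range(int(T / dt)):
                for rate1, rate2, sign in ((k_prod, k_prod, +1),
                                           (k_deg * x1, k_deg * x2, -1)):
                    u = rng.random()
                    x1 = max(x1 + sign * poisson.ppf(u, rate1 * dt), 0.0)
                    x2 = max(x2 + sign * poisson.ppf(1.0 - u, rate2 * dt), 0.0)
            return 0.5 * (x1 + x2), x1          # pair average, single-path value

        samples = np.array([antithetic_pair() for _ in range(400)])
        # fair comparison: the pair also consumes two simulations
        print("variance of antithetic pair average   :", samples[:, 0].var())
        print("variance of avg of 2 independent paths:", 0.5 * samples[:, 1].var())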

  11. Metal-Insulator Transition in Nanoparticle Solids: Insights from Kinetic Monte Carlo Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu, Luman; Vörös, Márton; Zimanyi, Gergely T.

Progress has been rapid in increasing the efficiency of energy conversion in nanoparticles. However, extraction of the photo-generated charge carriers remains challenging. Encouragingly, the charge mobility has been improved recently by driving nanoparticle (NP) films across the metal-insulator transition (MIT). To simulate MIT in NP films, we developed a hierarchical Kinetic Monte Carlo transport model. Electrons transfer between neighboring NPs via activated hopping when the NP energies differ by more than an overlap energy, but transfer by a non-activated quantum delocalization, if the NP energies are closer than the overlap energy. As the overlap energy increases, emerging percolating clusters support a metallic transport across the entire film. We simulated the evolution of the temperature-dependent electron mobility. We analyzed our data in terms of two candidate models of the MIT: (a) as a Quantum Critical Transition, signaled by an effective gap going to zero; and (b) as a Quantum Percolation Transition, where a sample-spanning metallic percolation path is formed as the fraction of the hopping bonds in the transport paths is going to zero. We found that the Quantum Percolation Transition theory provides a better description of the MIT. We also observed an anomalously low gap region next to the MIT. We discuss the relevance of our results in the light of recent experimental measurements.

  12. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
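
    A stripped-down version of category-based importance sampling is sketched below (numpy assumed; two toy categories replace the eight used in SILHS): sample points are drawn with a prescribed category frequency and reweighted so the grid-box average stays unbiased, and oversampling the category that carries the process rate reduces the error.

        # Category-based importance sampling of a grid-box average (numpy assumed).
        # Two toy categories ("clear" and "cloudy") replace the eight categories of
        # SILHS; the toy process rate is nonzero only in cloudy air, so drawing more
        # points from the cloudy category and reweighting cuts the sampling error.
        import numpy as np

        rng = np.random.default_rng(5)
        p_cloudy = 0.1                                   # true cloudy area fraction

        def estimate(n, sample_frac_cloudy):
            cloudy = rng.random(n) < sample_frac_cloudy  # prescribed sampling density
            w = np.where(cloudy, p_cloudy / sample_frac_cloudy,
                                 (1.0 - p_cloudy) / (1.0 - sample_frac_cloudy))
            q = rng.lognormal(mean=-1.0, sigma=0.5, size=n)   # subgrid water samples
            f = np.where(cloudy, 5.0 * q, 0.0)           # toy microphysical rate
            return np.mean(w * f)                        # unbiased reweighted average

        truth = p_cloudy * 5.0 * np.exp(-1.0 + 0.5 ** 2 / 2)  # analytic grid-box mean
        for frac in (0.1, 0.5, 0.9):
            errs = np.array([estimate(100, frac) for _ in range(2000)]) - truth
            print(f"cloudy sampling fraction {frac:.1f}: "
                  f"rmse = {np.sqrt(np.mean(errs ** 2)):.4f}")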

  13. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens.

    PubMed

    Zhou, Hufeng; Jin, Jingjing; Zhang, Haojun; Yi, Bo; Wozniak, Michal; Wong, Limsoon

    2012-01-01

Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc.). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or MySQL dump. IntPath data can also be retrieved and analyzed conveniently through web service by local programs or through web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms; and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath.

  14. Green function of the double-fractional Fokker-Planck equation: path integral and stochastic differential equations.

    PubMed

    Kleinert, H; Zatloukal, V

    2013-11-01

    The statistics of rare events, the so-called black-swan events, is governed by non-Gaussian distributions with heavy power-like tails. We calculate the Green functions of the associated Fokker-Planck equations and solve the related stochastic differential equations. We also discuss the subject in the framework of path integration.

  15. Low-coherence interferometric sensor system utilizing an integrated optics configuration

    NASA Astrophysics Data System (ADS)

    Plissi, M. V.; Rogers, A. J.; Brassington, D. J.; Wilson, M. G. F.

    1995-08-01

    The implementation of a twin Mach-Zehnder reference interferometer in an integrated optics substrate is described. From measurements of the fringe visibilities, an identification of the fringe order is attempted as a way to provide an absolute sensor for any parameter capable of modifying the difference in path length between two interfering optical paths.

  16. Explaining Technology Integration in K-12 Classrooms: A Multilevel Path Analysis Model

    ERIC Educational Resources Information Center

    Liu, Feng; Ritzhaupt, Albert D.; Dawson, Kara; Barron, Ann E.

    2017-01-01

    The purpose of this research was to design and test a model of classroom technology integration in the context of K-12 schools. The proposed multilevel path analysis model includes teacher, contextual, and school related variables on a teacher's use of technology and confidence and comfort using technology as mediators of classroom technology…

  17. Path integral learning of multidimensional movement trajectories

    NASA Astrophysics Data System (ADS)

    André, João; Santos, Cristina; Costa, Lino

    2013-10-01

    This paper explores the use of Path Integral Methods, particularly several variants of the recent Path Integral Policy Improvement (PI2) algorithm in multidimensional movement parametrized policy learning. We rely on Dynamic Movement Primitives (DMPs) to codify discrete and rhythmic trajectories, and apply the PI2-CMA and PIBB methods in the learning of optimal policy parameters, according to different cost functions that inherently encode movement objectives. Additionally we merge both of these variants and propose the PIBB-CMA algorithm, comparing all of them with the vanilla version of PI2. From the obtained results we conclude that PIBB-CMA surpasses all other methods in terms of convergence speed and iterative final cost, which leads to an increased interest in its application to more complex robotic problems.
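
    A minimal black-box PIBB-style update loop is sketched below (numpy assumed; a quadratic toy cost replaces the DMP rollouts and the CMA-style covariance adaptation is omitted): perturbations are sampled, costs are mapped to exponential weights, and the parameters are moved by reward-weighted averaging.

        # Minimal PIBB-style (black-box path-integral policy improvement) loop on a
        # toy quadratic cost (numpy assumed); DMP rollouts and covariance adaptation
        # are omitted.
        import numpy as np

        rng = np.random.default_rng(6)
        target = rng.normal(size=10)
        theta = np.zeros(10)                             # e.g. DMP basis weights

        def cost(th):                                    # toy movement cost
            return np.sum((th - target) ** 2)

        sigma, K, h = 1.0, 20, 10.0
        for _ in range(100):
            eps = sigma * rng.standard_normal((K, theta.size))   # explore K rollouts
            J = np.array([cost(theta + e) for e in eps])
            s = (J - J.min()) / (J.max() - J.min() + 1e-12)      # normalize costs
            w = np.exp(-h * s)
            w /= w.sum()                                 # exponentiated-cost weights
            theta = theta + w @ eps                      # reward-weighted averaging
            sigma *= 0.98                                # simple exploration decay
        print("final cost:", cost(theta))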

  18. Spin Path Integrals and Generations

    NASA Astrophysics Data System (ADS)

    Brannen, Carl

    2010-11-01

    The spin of a free electron is stable but its position is not. Recent quantum information research by G. Svetlichny, J. Tolar, and G. Chadzitaskos have shown that the Feynman position path integral can be mathematically defined as a product of incompatible states; that is, as a product of mutually unbiased bases (MUBs). Since the more common use of MUBs is in finite dimensional Hilbert spaces, this raises the question “what happens when spin path integrals are computed over products of MUBs?” Such an assumption makes spin no longer stable. We show that the usual spin-1/2 is obtained in the long-time limit in three orthogonal solutions that we associate with the three elementary particle generations. We give applications to the masses of the elementary leptons.

  19. Spin-resolved correlations in the warm-dense homogeneous electron gas

    NASA Astrophysics Data System (ADS)

    Arora, Priya; Kumar, Krishan; Moudgil, R. K.

    2017-04-01

We have studied spin-resolved correlations in the warm-dense homogeneous electron gas by determining the linear density and spin-density response functions, within the dynamical self-consistent mean-field theory of Singwi et al. The calculated spin-resolved pair-correlation function gσσ'(r) is compared with the recent restricted path-integral Monte Carlo (RPIMC) simulations due to Brown et al. [Phys. Rev. Lett. 110, 146405 (2013)], while interaction energy Eint and exchange-correlation free energy Fxc with the RPIMC and very recent ab initio quantum Monte Carlo (QMC) simulations by Dornheim et al. [Phys. Rev. Lett. 117, 156403 (2016)]. g↑↓(r) is found to be in good agreement with the RPIMC data, while a mismatch is seen in g↑↑(r) at small r where it becomes somewhat negative. As an interesting result, it is deduced that a non-monotonic T-dependence of g(0) is driven primarily by g↑↓(0). Our results of Eint and Fxc exhibit an excellent agreement with the QMC study due to Dornheim et al., which deals with the finite-size correction quite accurately. We observe, however, a visible deviation of Eint from the RPIMC data for high densities (~8% at rs = 1). Further, we have extended our study to the fully spin-polarized phase. Again, with the exception of the high density region, we find a good agreement of Eint with the RPIMC data. This points to the need of settling the problem of finite-size correction in the spin-polarized phase also. Interestingly, we also find that the thermal effects tend to oppose spatial localization as well as spin polarization of electrons. Supplementary material in the form of one zip file available from the Journal web page at https://doi.org/10.1140/epjb/e2017-70532-y

  20. The role of spatial memory and frames of reference in the precision of angular path integration.

    PubMed

    Arthur, Joeanna C; Philbeck, John W; Kleene, Nicholas J; Chichka, David

    2012-09-01

Angular path integration refers to the ability to maintain an estimate of self-location after a rotational displacement by integrating internally-generated (idiothetic) self-motion signals over time. Previous work has found that non-sensory inputs, namely spatial memory, can play a powerful role in angular path integration (Arthur et al., 2007, 2009). Here we investigated the conditions under which spatial memory facilitates angular path integration. We hypothesized that the benefit of spatial memory is particularly likely in spatial updating tasks in which one's self-location estimate is referenced to external space. To test this idea, we administered passive, non-visual body rotations (ranging 40°-140°) about the yaw axis and asked participants to use verbal reports or open-loop manual pointing to indicate the magnitude of the rotation. Prior to some trials, previews of the surrounding environment were given. We found that when participants adopted an egocentric frame of reference, the previously-observed benefit of previews on within-subject response precision was not manifested, regardless of whether remembered spatial frameworks were derived from vision or spatial language. We conclude that the powerful effect of spatial memory is dependent on one's frame of reference during self-motion updating.

  1. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uhl, Felix, E-mail: felix.uhl@rub.de; Marx, Dominik; Ceriotti, Michele

    2016-08-07

Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  2. Path integration guided with a quality map for shape reconstruction in the fringe reflection technique

    NASA Astrophysics Data System (ADS)

    Jing, Xiaoli; Cheng, Haobo; Wen, Yongfu

    2018-04-01

    A new local integration algorithm called quality map path integration (QMPI) is reported for shape reconstruction in the fringe reflection technique. A quality map is proposed to evaluate the quality of gradient data locally, and functions as a guideline for the integrated path. The presented method can be employed in wavefront estimation from its slopes over the general shaped surface with slope noise equivalent to that in practical measurements. Moreover, QMPI is much better at handling the slope data with local noise, which may be caused by the irregular shapes of the surface under test. The performance of QMPI is discussed by simulations and experiment. It is shown that QMPI not only improves the accuracy of local integration, but can also be easily implemented with no iteration compared to Southwell zonal reconstruction (SZR). From an engineering point-of-view, the proposed method may also provide an efficient and stable approach for different shapes with high-precise demand.

  3. Kinetic Monte Carlo Simulation of Cation Diffusion in Low-K Ceramics

    NASA Technical Reports Server (NTRS)

    Good, Brian

    2013-01-01

    Low thermal conductivity (low-K) ceramic materials are of interest to the aerospace community for use as the thermal barrier component of coating systems for turbine engine components. In particular, zirconia-based materials exhibit both low thermal conductivity and structural stability at high temperature, making them suitable for such applications. Because creep is one of the potential failure modes, and because diffusion is a mechanism by which creep takes place, we have performed computer simulations of cation diffusion in a variety of zirconia-based low-K materials. The kinetic Monte Carlo simulation method is an alternative to the more widely known molecular dynamics (MD) method. It is designed to study "infrequent-event" processes, such as diffusion, for which MD simulation can be highly inefficient. We describe the results of kinetic Monte Carlo computer simulations of cation diffusion in several zirconia-based materials, specifically, zirconia doped with Y, Gd, Nb and Yb. Diffusion paths are identified, and migration energy barriers are obtained from density functional calculations and from the literature. We present results on the temperature dependence of the diffusivity, and on the effects of the presence of oxygen vacancies in cation diffusion barrier complexes as well.
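
    The basic kinetic Monte Carlo loop for thermally activated hopping can be sketched in a few lines (numpy assumed; the 1D lattice and the uniformly random barriers are illustrative, not the density-functional barriers used in the study): rates follow an Arrhenius law, residence times are exponential, and a crude diffusivity follows from the accumulated displacement.

        # Kinetic Monte Carlo of thermally activated hops on a 1D lattice with random
        # migration barriers (numpy assumed). Rates are Arrhenius, waiting times are
        # exponential, and a crude tracer diffusivity comes from the displacement.
        import numpy as np

        rng = np.random.default_rng(7)
        kB, nu0, a = 8.617e-5, 1.0e13, 1.0          # eV/K, attempt frequency (1/s), hop length
        barriers = rng.uniform(0.8, 1.2, size=200)  # bond barriers in eV (toy values)

        def diffusivity(T, n_hops=10_000):
            pos, t, x = 0, 0.0, 0.0
            for _ in range(n_hops):
                k_right = nu0 * np.exp(-barriers[pos % barriers.size] / (kB * T))
                k_left  = nu0 * np.exp(-barriers[(pos - 1) % barriers.size] / (kB * T))
                k_tot = k_right + k_left
                t += rng.exponential(1.0 / k_tot)        # KMC residence time
                if rng.random() < k_right / k_tot:       # choose the hop direction
                    pos, x = pos + 1, x + a
                else:
                    pos, x = pos - 1, x - a
            return x * x / (2.0 * t)                     # crude 1D estimate of D

        for T in (1200.0, 1600.0):
            D = np.mean([diffusivity(T) for _ in range(10)])
            print(f"T = {T:.0f} K:  D ~ {D:.3e} hop-lengths^2 per second")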

  4. Fast alternative Monte Carlo formalism for a class of problems in biophotonics

    NASA Astrophysics Data System (ADS)

    Miller, Steven D.

    1997-12-01

    A practical and effective, alternative Monte Carlo formalism is presented that rapidly finds flux solutions to the radiative transport equation for a class of problems in biophotonics; namely, wide-beam irradiance of finite, optically anisotropic homogeneous or heterogeneous biomedias, which both strongly scatter and absorb light. Such biomedias include liver, tumors, blood, or highly blood perfused tissues. As Fermat rays comprising a wide coherent (laser) beam enter the tissue, they evolve into a bundle of random optical paths or trajectories due to scattering. Overall, this can be physically interpreted as a bundle of Markov trajectories traced out by a 'gas' of Brownian-like point photons being successively scattered and absorbed. By considering the cumulative flow of a statistical bundle of trajectories through interior data planes, the effective equivalent information of the (generally unknown) analytical flux solutions of the transfer equation rapidly emerges. Unlike the standard Monte Carlo techniques, which evaluate scalar fluence, this technique is faster, more efficient, and simpler to apply for this specific class of optical situations. Other analytical or numerical techniques can either become unwieldy or lack viability or are simply more difficult to apply. Illustrative flux calculations are presented for liver, blood, and tissue-tumor-tissue systems.
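
    A schematic of the trajectory-bundle picture is given below (numpy assumed; isotropic scattering in a plane-parallel slab replaces the anisotropic tissue optics of the paper): photon packets take exponentially distributed steps, lose weight to absorption, and their net downward weight is tallied each time they cross an interior data plane.

        # Schematic photon random-walk Monte Carlo in a scattering, absorbing slab
        # (numpy assumed; isotropic scattering, 1D slab geometry). The net downward
        # packet weight crossing interior data planes serves as a crude flux tally.
        import numpy as np

        rng = np.random.default_rng(8)
        mu_s, mu_a, thickness = 10.0, 1.0, 1.0    # scattering, absorption (1/mm), slab (mm)
        planes = np.array([0.25, 0.50, 0.75])     # interior data planes (depth in mm)
        flux = np.zeros(planes.size)
        n_photons = 5000

        for _ in range(n_photons):
            z, uz, w = 0.0, 1.0, 1.0              # depth, direction cosine, packet weight
            while 0.0 <= z <= thickness and w > 1e-4:
                step = rng.exponential(1.0 / mu_s)          # free path to next scatter
                z_new = z + uz * step
                w *= np.exp(-mu_a * step)                   # absorption along the segment
                crossed = (planes > min(z, z_new)) & (planes <= max(z, z_new))
                flux[crossed] += w if uz > 0 else -w        # net downward crossings
                z = z_new
                uz = rng.uniform(-1.0, 1.0)                 # isotropic redirection
        print("net downward flux per launched photon at", planes, ":", flux / n_photons)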

  5. DXRaySMCS: a user-friendly interface developed for prediction of diagnostic radiology X-ray spectra produced by Monte Carlo (MCNP-4C) simulation.

    PubMed

    Bahreyni Toossi, M T; Moradi, H; Zare, H

    2008-01-01

    In this work, the general purpose Monte Carlo N-particle radiation transport computer code (MCNP-4C) was used for the simulation of X-ray spectra in diagnostic radiology. The electron's path in the target was followed until its energy was reduced to 10 keV. A user-friendly interface named 'diagnostic X-ray spectra by Monte Carlo simulation (DXRaySMCS)' was developed to facilitate the application of MCNP-4C code for diagnostic radiology spectrum prediction. The program provides a user-friendly interface for: (i) modifying the MCNP input file, (ii) launching the MCNP program to simulate electron and photon transport and (iii) processing the MCNP output file to yield a summary of the results (relative photon number per energy bin). In this article, the development and characteristics of DXRaySMCS are outlined. As part of the validation process, output spectra for 46 diagnostic radiology system settings produced by DXRaySMCS were compared with the corresponding IPEM78. Generally, there is a good agreement between the two sets of spectra. No statistically significant differences have been observed between IPEM78 reported spectra and the simulated spectra generated in this study.

  6. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several magnitudes larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
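
    On a small chain the comparison between an exact total-variation mixing time and a spectral (relaxation-time) bound can be reproduced directly, as sketched below (numpy assumed; a lazy random walk on a cycle stands in for the much larger matching chains handled by marathon).

        # Exact total-variation mixing time versus the spectral (relaxation-time)
        # bound for a lazy random walk on a cycle (numpy assumed).
        import numpy as np

        n, eps = 12, 0.25
        P = np.zeros((n, n))
        for i in range(n):                               # lazy nearest-neighbour walk
            P[i, i] = 0.5
            P[i, (i - 1) % n] = P[i, (i + 1) % n] = 0.25
        pi = np.full(n, 1.0 / n)                         # uniform stationary distribution

        dist, t_mix = np.eye(n), 0                       # rows = distributions per start
        while 0.5 * np.abs(dist - pi).sum(axis=1).max() > eps:
            dist = dist @ P
            t_mix += 1

        slem = np.sort(np.abs(np.linalg.eigvals(P)))[-2] # second-largest eigenvalue modulus
        t_rel = 1.0 / (1.0 - slem)                       # relaxation time
        bound = int(np.ceil(t_rel * np.log(1.0 / (eps * pi.min()))))
        print("exact t_mix(0.25):", t_mix, "  spectral upper bound:", bound)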

  7. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
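
    The core simulation loop is easy to sketch for the single-mediator case (numpy and scipy assumed; a joint-significance test of the a and b paths stands in for the Mplus-based analysis described in the paper): datasets are generated from assumed path coefficients, and power is the fraction of replications in which the mediated effect is detected.

        # Monte Carlo power analysis for the single-mediator model (numpy and scipy
        # assumed): simulate datasets from assumed a, b, c' paths and count the
        # replications where both the a and b paths are significant.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)

        def ols_pvalues(X, y):
            """Two-sided p-values for OLS coefficients (X includes an intercept column)."""
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            dof = len(y) - X.shape[1]
            s2 = np.sum((y - X @ beta) ** 2) / dof
            se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
            return 2.0 * stats.t.sf(np.abs(beta / se), dof)

        def power(n, a=0.3, b=0.3, c=0.1, reps=1000, alpha=0.05):
            hits = 0
            for _ in range(reps):
                x = rng.standard_normal(n)
                m = a * x + rng.standard_normal(n)
                y = b * m + c * x + rng.standard_normal(n)
                ones = np.ones(n)
                p_a = ols_pvalues(np.column_stack([ones, x]), m)[1]
                p_b = ols_pvalues(np.column_stack([ones, m, x]), y)[1]
                hits += (p_a < alpha) and (p_b < alpha)
            return hits / reps

        for n in (50, 100, 200):
            print(f"N = {n}: estimated power for the mediated effect = {power(n):.2f}")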

  8. The impact of absorption coefficient on polarimetric determination of Berry phase based depth resolved characterization of biomedical scattering samples: a polarized Monte Carlo investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baba, Justin S; Koju, Vijay; John, Dwayne O

    2016-01-01

The modulation of the state of polarization of photons due to scatter generates associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and population distributions of image forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on Berry phase tracking implemented Polarized Monte Carlo Code, indicate that sample absorption plays a significant role in the mean depth attained by the image forming backscattered detected photons.

  9. A taxonomy of integral reaction path analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grcar, Joseph F.; Day, Marcus S.; Bell, John B.

    2004-12-23

    W. C. Gardiner observed that achieving understanding through combustion modeling is limited by the ability to recognize the implications of what has been computed and to draw conclusions about the elementary steps underlying the reaction mechanism. This difficulty can be overcome in part by making better use of reaction path analysis in the context of multidimensional flame simulations. Following a survey of current practice, an integral reaction flux is formulated in terms of conserved scalars that can be calculated in a fully automated way. Conditional analyses are then introduced, and a taxonomy for bidirectional path analysis is explored. Many examples illustrate the resulting path analysis and uncover some new results about nonpremixed methane-air laminar jets.

  10. Functional integration of vertical flight path and speed control using energy principles

    NASA Technical Reports Server (NTRS)

    Lambregts, A. A.

    1984-01-01

    A generalized automatic flight control system was developed which integrates all longitudinal flight path and speed control functions previously provided by a pitch autopilot and autothrottle. In this design, a net thrust command is computed based on total energy demand arising from both flight path and speed targets. The elevator command is computed based on the energy distribution error between flight path and speed. The engine control is configured to produce the commanded net thrust. The design incorporates control strategies and hierarchy to deal systematically and effectively with all aircraft operational requirements, control nonlinearities, and performance limits. Consistent decoupled maneuver control is achieved for all modes and flight conditions without outer loop gain schedules, control law submodes, or control function duplication.

  11. Cortical Hubs Form a Module for Multisensory Integration on Top of the Hierarchy of Cortical Networks

    PubMed Central

    Zamora-López, Gorka; Zhou, Changsong; Kurths, Jürgen

    2009-01-01

    Sensory stimuli entering the nervous system follow particular paths of processing, typically separated (segregated) from the paths of other modal information. However, sensory perception, awareness and cognition emerge from the combination of information (integration). The corticocortical networks of cats and macaque monkeys display three prominent characteristics: (i) modular organisation (facilitating the segregation), (ii) abundant alternative processing paths and (iii) the presence of highly connected hubs. Here, we study in detail the organisation and potential function of the cortical hubs by graph analysis and information theoretical methods. We find that the cortical hubs form a spatially delocalised, but topologically central module with the capacity to integrate multisensory information in a collaborative manner. With this, we resolve the underlying anatomical substrate that supports the simultaneous capacity of the cortex to segregate and to integrate multisensory information. PMID:20428515

  12. MULTI-POLLUTANT CONCENTRATION MEASUREMENTS AROUND A CONCENTRATED SWINE PRODUCTION FACILITY USING OPEN-PATH FTIR SPECTROMETRY

    EPA Science Inventory

    Open-path Fourier transform infrared (OP/FTIR) spectrometry was used to measure the concentrations of ammonia, methane, and other atmospheric gasses around an integrated industrial swine production facility in eastern North Carolina. Several single-path measurements were made ove...

  13. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

  14. Monte Carlo Simulations and Generation of the SPI Response

    NASA Technical Reports Server (NTRS)

    Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

    2003-01-01

    In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

  15. Probability techniques for reliability analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Wetherhold, Robert C.; Ucci, Anthony M.

    1994-01-01

    Traditional design approaches for composite materials have employed deterministic criteria for failure analysis. New approaches are required to predict the reliability of composite structures since strengths and stresses may be random variables. This report will examine and compare methods used to evaluate the reliability of composite laminae. The two types of methods that will be evaluated are fast probability integration (FPI) methods and Monte Carlo methods. In these methods, reliability is formulated as the probability that an explicit function of random variables is less than a given constant. Using failure criteria developed for composite materials, a function of design variables can be generated which defines a 'failure surface' in probability space. A number of methods are available to evaluate the integration over the probability space bounded by this surface; this integration delivers the required reliability. The methods which will be evaluated are: the first order, second moment FPI methods; second order, second moment FPI methods; the simple Monte Carlo; and an advanced Monte Carlo technique which utilizes importance sampling. The methods are compared for accuracy, efficiency, and for the conservativism of the reliability estimation. The methodology involved in determining the sensitivity of the reliability estimate to the design variables (strength distributions) and importance factors is also presented.
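
    As a rough illustration of the Monte Carlo side of that comparison, the sketch below estimates a reliability-type failure probability P(R - S < 0) for an assumed normal strength R and stress S, first by simple Monte Carlo and then with importance sampling on the stress; the distributions and the sampling shift are invented for the example and are not the laminate failure criteria of the report.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative limit state: failure when stress S exceeds strength R (g = R - S < 0)
mu_R, sd_R = 600.0, 40.0    # strength, e.g. MPa (assumed)
mu_S, sd_S = 450.0, 50.0    # stress, e.g. MPa (assumed)

def simple_mc(n=200_000):
    """Plain Monte Carlo estimate of the failure probability."""
    r = rng.normal(mu_R, sd_R, n)
    s = rng.normal(mu_S, sd_S, n)
    return np.mean(r - s < 0.0)

def importance_mc(n=20_000, shift=150.0):
    """Importance sampling: draw the stress from a density shifted toward the
    failure region and reweight each sample by the likelihood ratio f(s)/h(s)."""
    s = rng.normal(mu_S + shift, sd_S, n)
    r = rng.normal(mu_R, sd_R, n)
    w = np.exp(-0.5 * ((s - mu_S) / sd_S) ** 2
               + 0.5 * ((s - mu_S - shift) / sd_S) ** 2)
    return np.mean(w * (r - s < 0.0))

print(simple_mc(), importance_mc())   # both should approach ~1e-2 for these inputs
```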

  16. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  17. From conformal blocks to path integrals in the Vaidya geometry

    NASA Astrophysics Data System (ADS)

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin; Sonner, Julian

    2017-09-01

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.

  18. Real-time path planning and autonomous control for helicopter autorotation

    NASA Astrophysics Data System (ADS)

    Yomchinda, Thanan

    Autorotation is a descending maneuver that can be used to recover helicopters in the event of total loss of engine power; however it is an extremely difficult and complex maneuver. The objective of this work is to develop a real-time system which provides full autonomous control for autorotation landing of helicopters. The work includes the development of an autorotation path planning method and integration of the path planner with a primary flight control system. The trajectory is divided into three parts: entry, descent and flare. Three different optimization algorithms are used to generate trajectories for each of these segments. The primary flight control is designed using a linear dynamic inversion control scheme, and a path following control law is developed to track the autorotation trajectories. Details of the path planning algorithm, trajectory following control law, and autonomous autorotation system implementation are presented. The integrated system is demonstrated in real-time high fidelity simulations. Results indicate feasibility of the capability of the algorithms to operate in real-time and of the integrated systems ability to provide safe autorotation landings. Preliminary simulations of autonomous autorotation on a small UAV are presented which will lead to a final hardware demonstration of the algorithms.

  19. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind.

    PubMed

    Luo, He; Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model, the results of which show that an effective UAV task allocation and path planning solution under steady wind can be provided.

  20. Integrated optimization of unmanned aerial vehicle task allocation and path planning under steady wind

    PubMed Central

    Liang, Zhengzheng; Zhu, Moning; Hu, Xiaoxuan; Wang, Guoqiang

    2018-01-01

    Wind has a significant effect on the control of fixed-wing unmanned aerial vehicles (UAVs), resulting in changes in their ground speed and direction, which has an important influence on the results of integrated optimization of UAV task allocation and path planning. The objective of this integrated optimization problem changes from minimizing flight distance to minimizing flight time. In this study, the Euclidean distance between any two targets is expanded to the Dubins path length, considering the minimum turning radius of fixed-wing UAVs. According to the vector relationship between wind speed, UAV airspeed, and UAV ground speed, a method is proposed to calculate the flight time of UAV between targets. On this basis, a variable-speed Dubins path vehicle routing problem (VS-DP-VRP) model is established with the purpose of minimizing the time required for UAVs to visit all the targets and return to the starting point. By designing a crossover operator and mutation operator, the genetic algorithm is used to solve the model, the results of which show that an effective UAV task allocation and path planning solution under steady wind can be provided. PMID:29561888
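
    The flight-time calculation described in these two records rests on the standard vector relation between airspeed, wind, and ground speed. A minimal Python version for a single straight leg is sketched below (the Dubins-path geometry and the VRP model itself are omitted); the parameter values in the usage line are invented.

```python
import numpy as np

def ground_speed(track_vec, airspeed, wind):
    """Ground speed along a desired track in steady wind: the heading is chosen
    so the crosswind component is cancelled, leaving the along-track component
    of airspeed plus the along-track wind."""
    d = np.asarray(track_vec, float)
    d = d / np.linalg.norm(d)
    w = np.asarray(wind, float)
    w_par = w @ d                              # along-track wind component
    w_perp = np.linalg.norm(w - w_par * d)     # crosswind magnitude
    if w_perp >= airspeed:
        raise ValueError("crosswind exceeds airspeed; track not flyable")
    return w_par + np.sqrt(airspeed**2 - w_perp**2)

def flight_time(p_from, p_to, airspeed, wind):
    """Time to fly a straight leg between two waypoints at fixed airspeed."""
    d = np.asarray(p_to, float) - np.asarray(p_from, float)
    return np.linalg.norm(d) / ground_speed(d, airspeed, wind)

# usage: 10 km eastward leg, 20 m/s airspeed, 5 m/s tailwind -> about 400 s
print(flight_time([0.0, 0.0], [10_000.0, 0.0], 20.0, [5.0, 0.0]))
```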

  1. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biological significant long-time bath memory on the order of hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex where in all cases excellent agreement with numerically exact reference data is obtained.

  2. Effects of superspreaders in spread of epidemic

    NASA Astrophysics Data System (ADS)

    Fujie, Ryo; Odagaki, Takashi

    2007-02-01

    Within the standard SIR model with spatial structure, we propose two models for the superspreader. In one model, superspreaders have intrinsically strong infectiousness. In the other model, they have many social connections. By Monte Carlo simulation, we obtain the percolation probability, the propagation speed, the epidemic curve, the distribution of secondary infections and the propagation path as functions of the population and the density of superspreaders. By comparing the results with data from the 2003 SARS outbreak in Singapore, we conclude that the latter model can explain the observations.
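
    A toy spatial Monte Carlo of the first variant (intrinsically stronger infectiousness) conveys the flavour of such simulations. The sketch below is not the model of the paper itself, and every parameter value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def sir_superspreader(n=400, frac_ss=0.05, w_normal=0.2, w_ss=0.8,
                      radius=0.08, gamma=0.3, steps=200):
    """Toy spatial SIR Monte Carlo with a fraction of 'strong infectiousness'
    superspreaders. Individuals are random points in the unit square; an
    infected site infects each susceptible neighbour within `radius` with its
    own per-contact transmission probability, then recovers with rate gamma."""
    pos = rng.random((n, 2))
    w = np.where(rng.random(n) < frac_ss, w_ss, w_normal)   # per-individual infectiousness
    state = np.zeros(n, dtype=int)            # 0 = S, 1 = I, 2 = R
    state[rng.integers(n)] = 1                # single initial case
    curve = []
    for _ in range(steps):
        infected = np.flatnonzero(state == 1)
        for i in infected:
            d = np.linalg.norm(pos - pos[i], axis=1)
            contacts = np.flatnonzero((d < radius) & (state == 0))
            state[contacts[rng.random(contacts.size) < w[i]]] = 1
        state[infected[rng.random(infected.size) < gamma]] = 2   # recoveries
        curve.append(int((state == 1).sum()))
    return curve   # epidemic curve: number currently infected at each step

print(max(sir_superspreader()))   # peak of the epidemic curve for this run
```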

  3. Hamiltonian and potentials in derivative pricing models: exact results and lattice simulations

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2004-03-01

    The pricing of options, warrants and other derivative securities is one of the great successes of financial economics. These financial products can be modeled and simulated using quantum mechanical instruments based on a Hamiltonian formulation. We show here some applications of these methods for various potentials, which we have simulated via lattice Langevin and Monte Carlo algorithms, to the pricing of options. We focus on barrier or path-dependent options, showing in some detail the computational strategies involved.
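
    For readers who want a concrete baseline, a plain Monte Carlo valuation of one such path-dependent contract (a down-and-out call under geometric Brownian motion) is sketched below. It is a generic textbook benchmark, not the Hamiltonian lattice formulation of the paper, and all numerical parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def down_and_out_call(s0=100.0, k=100.0, barrier=85.0, r=0.05, sigma=0.2,
                      T=1.0, n_steps=252, n_paths=100_000):
    """Monte Carlo price of a down-and-out barrier call: simulate GBM paths,
    knock out any path that touches the barrier at a monitoring date, and
    discount the average surviving payoff."""
    dt = T / n_steps
    s = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s *= np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        alive &= s > barrier                 # knocked out once the barrier is crossed
    payoff = np.zeros(n_paths)
    payoff[alive] = np.maximum(s[alive] - k, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(down_and_out_call())
```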

  4. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30 nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400 nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than by low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation of simulated DNA damage to cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
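
    The DBSCAN step mentioned in the Methods can be illustrated with scikit-learn on a handful of 3-D event coordinates; the eps and min_samples values below are invented for the example and are not those of the cell-nucleus model.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def count_candidate_dsbs(event_xyz_nm, eps_nm=3.2, min_events=2):
    """Cluster energy-deposition events that intersect chromatin fibers with
    DBSCAN and count the clusters as candidate double-strand-break sites.
    Coordinates are in nm; parameters are illustrative only."""
    labels = DBSCAN(eps=eps_nm, min_samples=min_events).fit(event_xyz_nm).labels_
    return len(set(labels) - {-1})        # ignore noise points labelled -1

# toy example: two tight clusters of events plus one isolated event
events = np.array([[0, 0, 0], [1, 0.5, 0], [0.5, 1, 0.2],
                   [50, 50, 50], [51, 50, 50.5],
                   [200, 0, 0]])
print(count_candidate_dsbs(events))       # -> 2 candidate DSB sites
```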

  5. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory.

    PubMed

    Marsalek, Ondrej; Markland, Thomas E

    2016-02-07

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
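
    The contraction-to-centroid limit described above has a compact energy expression: the cheap reference potential is evaluated on all P beads, while the expensive potential is evaluated once, at the bead centroid, and the difference is weighted by P. A schematic Python version with the two potentials left as user-supplied callables is given below; forces, the thermostatted dynamics, and the multiple-time-step integration are all omitted, so this is a sketch of the energy expression only, not the method of the paper.

```python
import numpy as np

def contracted_potential(beads, v_full, v_ref):
    """Ring-polymer contraction to the centroid.

    beads  : array of shape (P, n_atoms, 3), the imaginary-time bead positions
    v_full : expensive potential, configuration (n_atoms, 3) -> energy
    v_ref  : cheap reference potential, configuration (n_atoms, 3) -> energy

    Returns the contracted estimate of the total ring-polymer potential:
    sum_k v_ref(q_k) + P * [v_full(centroid) - v_ref(centroid)].
    """
    P = beads.shape[0]
    centroid = beads.mean(axis=0)            # bead centroid configuration
    e_ref = sum(v_ref(q) for q in beads)     # cheap term on every bead
    return e_ref + P * (v_full(centroid) - v_ref(centroid))
```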

  6. The unbiasedness of a generalized mirage boundary correction method for Monte Carlo integration estimators of volume

    Treesearch

    Thomas B. Lynch; Jeffrey H. Gove

    2014-01-01

    The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...

  7. Assessment of Hydrogen Sulfide Minimum Detection Limits of an Open Path Tunable Diode Laser

    EPA Science Inventory

    During June 2007, U.S. EPA conducted a feasibility study to determine whether the EPA OTM 10 measurement approach, also known as radial plume mapping (RPM), was feasible. A Boreal open-path tunable diode laser (OP-TDL) to collect path-integrated hydrogen sulfide measurements alon...

  8. Creativity, Spirituality, and Transcendence: Paths to Integrity and Wisdom in the Mature Self. Publications in Creativity Research.

    ERIC Educational Resources Information Center

    Miller, Melvin E., Ed.; Cook-Greuter, Susanne R., Ed.

    This book contains 11 papers on creativity, spirituality, and transcendence as paths to integrity and wisdom in the mature self. The book begins with the paper "Introduction--Creativity in Adulthood: Personal Maturity and Openness to Extraordinary Sources of Inspiration" (Susanne R. Cook-Greuter, Melvin E. Miller). The next four papers,…

  9. Derivation of the Schrodinger Equation from the Hamilton-Jacobi Equation in Feynman's Path Integral Formulation of Quantum Mechanics

    ERIC Educational Resources Information Center

    Field, J. H.

    2011-01-01

    It is shown how the time-dependent Schrodinger equation may be simply derived from the dynamical postulate of Feynman's path integral formulation of quantum mechanics and the Hamilton-Jacobi equation of classical mechanics. Schrodinger's own published derivations of quantum wave equations, the first of which was also based on the Hamilton-Jacobi…

  10. Finding the way with a noisy brain.

    PubMed

    Cheung, Allen; Vickerstaff, Robert

    2010-11-11

    Successful navigation is fundamental to the survival of nearly every animal on earth, and achieved by nervous systems of vastly different sizes and characteristics. Yet surprisingly little is known of the detailed neural circuitry from any species which can accurately represent space for navigation. Path integration is one of the oldest and most ubiquitous navigation strategies in the animal kingdom. Despite a plethora of computational models, from equational to neural network form, there is currently no consensus, even in principle, of how this important phenomenon occurs neurally. Recently, all path integration models were examined according to a novel, unifying classification system. Here we combine this theoretical framework with recent insights from directed walk theory, and develop an intuitive yet mathematically rigorous proof that only one class of neural representation of space can tolerate noise during path integration. This result suggests many existing models of path integration are not biologically plausible due to their intolerance to noise. This surprising result imposes significant computational limitations on the neurobiological spatial representation of all successfully navigating animals, irrespective of species. Indeed, noise-tolerance may be an important functional constraint on the evolution of neuroarchitectural plans in the animal kingdom.

  11. Quantum Mechanics, Path Integrals and Option Pricing:. Reducing the Complexity of Finance

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Corianò, Claudio; Srikant, Marakani

    2003-04-01

    Quantum Finance represents the synthesis of the techniques of quantum theory (quantum mechanics and quantum field theory) to theoretical and applied finance. After a brief overview of the connection between these fields, we illustrate some of the methods of lattice simulations of path integrals for the pricing of options. The ideas are sketched out for simple models, such as the Black-Scholes model, where analytical and numerical results are compared. Application of the method to nonlinear systems is also briefly overviewed. More general models, for exotic or path-dependent options are discussed.

  12. Path integration of the time-dependent forced oscillator with a two-time quadratic action

    NASA Astrophysics Data System (ADS)

    Zhang, Tian Rong; Cheng, Bin Kang

    1986-03-01

    Using the prodistribution theory proposed by DeWitt-Morette [C. DeWitt-Morette, Commun. Math. Phys. 28, 47 (1972); C. DeWitt-Morette, A. Maheshwari, and B. Nelson, Phys. Rep. 50, 257 (1979)], the path integration of a time-dependent forced harmonic oscillator with a two-time quadratic action has been given in terms of the solutions of some integrodifferential equations. We then evaluate explicitly both the classical path and the propagator for the specific kernel introduced by Feynman in the polaron problem. Our results include the previous known results as special cases.

  13. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    PubMed

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
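
    The Monte Carlo confidence interval mentioned at the end of the abstract is simple to sketch: draw the two path estimates from their sampling distributions, multiply, and take percentile limits of the simulated products. The version below assumes independent normal sampling distributions for the two paths, and the numbers in the usage line are invented.

```python
import numpy as np

def monte_carlo_ci(a_hat, se_a, b_hat, se_b, reps=20_000, conf=95, seed=0):
    """Monte Carlo confidence interval for an indirect effect a*b: simulate the
    two path coefficients from normal sampling distributions, form the product,
    and return the percentile limits."""
    rng = np.random.default_rng(seed)
    ab = rng.normal(a_hat, se_a, reps) * rng.normal(b_hat, se_b, reps)
    lo, hi = np.percentile(ab, [(100 - conf) / 2, 100 - (100 - conf) / 2])
    return lo, hi

# illustrative point estimates and standard errors for the two paths
print(monte_carlo_ci(0.4, 0.1, 0.3, 0.12))
```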

  14. Numerical integration of detector response functions via Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  15. Numerical integration of detector response functions via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.

    2017-09-01

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼ 1000 × faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.

  16. Numerical integration of detector response functions via Monte Carlo simulations

    DOE PAGES

    Kelly, Keegan John; O'Donnell, John M.; Gomez, Jaime A.; ...

    2017-06-13

    Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ~1000× faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. Here, this method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
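
    The speed-up quoted in these records comes from reusing a precomputed response: once the Monte Carlo runs have populated a response matrix, producing an output spectrum for a new trial source spectrum is a single matrix-vector product. A schematic version (with an invented 3-by-4 response matrix, not Chi-Nu data) is shown below.

```python
import numpy as np

def fold_with_response(response: np.ndarray, source_spectrum: np.ndarray) -> np.ndarray:
    """Fold an emitted-particle spectrum with a precomputed detector response.
    response[j, i] is the (Monte Carlo estimated) probability that a particle
    emitted in source bin i is recorded in detector bin j, so folding is a
    single matrix-vector product instead of a new transport simulation."""
    return response @ source_spectrum

# toy example: 3 source bins, 4 detector bins (all numbers illustrative)
R = np.array([[0.6, 0.1, 0.0],
              [0.2, 0.5, 0.1],
              [0.1, 0.2, 0.5],
              [0.0, 0.1, 0.3]])
phi = np.array([1.0e5, 5.0e4, 2.0e4])    # emitted particles per source bin
print(fold_with_response(R, phi))         # expected counts per detector bin
```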

  17. Integration across Time Determines Path Deviation Discrimination for Moving Objects

    PubMed Central

    Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.

    2008-01-01

    Background Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects–a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653

  18. IntPath--an integrated pathway gene relationship database for model organisms and important pathogens

    PubMed Central

    2012-01-01

    Background Pathway data are important for understanding the relationship between genes, proteins and many other molecules in living organisms. Pathway gene relationships are crucial information for guidance, prediction, reference and assessment in biochemistry, computational biology, and medicine. Many well-established databases--e.g., KEGG, WikiPathways, and BioCyc--are dedicated to collecting pathway data for public access. However, the effectiveness of these databases is hindered by issues such as incompatible data formats, inconsistent molecular representations, inconsistent molecular relationship representations, inconsistent referrals to pathway names, and incomprehensive data from different databases. Results In this paper, we overcome these issues through extraction, normalization and integration of pathway data from several major public databases (KEGG, WikiPathways, BioCyc, etc). We build a database that not only hosts our integrated pathway gene relationship data for public access but also maintains the necessary updates in the long run. This public repository is named IntPath (Integrated Pathway gene relationship database for model organisms and important pathogens). Four organisms--S. cerevisiae, M. tuberculosis H37Rv, H. sapiens and M. musculus--are included in this version (V2.0) of IntPath. IntPath uses the "full unification" approach to ensure no deletion and no introduced noise in this process. Therefore, IntPath contains much richer pathway-gene and pathway-gene pair relationships and a much larger number of non-redundant genes and gene pairs than any of the single-source databases. The gene relationships of each gene (measured by average node degree) per pathway are significantly richer. The gene relationships in each pathway (measured by average number of gene pairs per pathway) are also considerably richer in the integrated pathways. Moderate manual curation is involved to remove errors and noise from the source data (e.g., the gene ID errors in WikiPathways and relationship errors in KEGG). We turn complicated and incompatible XML data formats and inconsistent gene and gene relationship representations from different source databases into normalized and unified pathway-gene and pathway-gene pair relationships neatly recorded in simple tab-delimited text format and MySQL tables, which facilitates convenient automatic computation and large-scale referencing in many related studies. IntPath data can be downloaded in text format or as a MySQL dump. IntPath data can also be retrieved and analyzed conveniently through a web service by local programs or through the web interface by mouse clicks. Several useful analysis tools are also provided in IntPath. Conclusions We have overcome in IntPath the issues of compatibility, consistency, and comprehensiveness that often hamper effective use of pathway databases. We have included four organisms in the current release of IntPath. Our methodology and programs described in this work can be easily applied to other organisms, and we will include more model organisms and important pathogens in future releases of IntPath. IntPath maintains regular updates and is freely available at http://compbio.ddns.comp.nus.edu.sg:8080/IntPath. PMID:23282057

  19. Accurate Monte Carlo simulations for nozzle design, commissioning and quality assurance for a proton radiation therapy facility.

    PubMed

    Paganetti, H; Jiang, H; Lee, S Y; Kooy, H M

    2004-07-01

    Monte Carlo dosimetry calculations are essential methods in radiation therapy. To take full advantage of this tool, the beam delivery system has to be simulated in detail and the initial beam parameters have to be known accurately. The modeling of the beam delivery system itself opens various areas where Monte Carlo calculations prove extremely helpful, such as for design and commissioning of a therapy facility as well as for quality assurance verification. The gantry treatment nozzles at the Northeast Proton Therapy Center (NPTC) at Massachusetts General Hospital (MGH) were modeled in detail using the GEANT4.5.2 Monte Carlo code. For this purpose, various novel solutions for simulating irregular shaped objects in the beam path, like contoured scatterers, patient apertures or patient compensators, were found. The four-dimensional, in time and space, simulation of moving parts, such as the modulator wheel, was implemented. Further, the appropriate physics models and cross sections for proton therapy applications were defined. We present comparisons between measured data and simulations. These show that by modeling the treatment nozzle with millimeter accuracy, it is possible to reproduce measured dose distributions with an accuracy in range and modulation width, in the case of a spread-out Bragg peak (SOBP), of better than 1 mm. The excellent agreement demonstrates that the simulations can even be used to generate beam data for commissioning treatment planning systems. The Monte Carlo nozzle model was used to study mechanical optimization in terms of scattered radiation and secondary radiation in the design of the nozzles. We present simulations on the neutron background. Further, the Monte Carlo calculations supported commissioning efforts in understanding the sensitivity of beam characteristics and how these influence the dose delivered. We present the sensitivity of dose distributions in water with respect to various beam parameters and geometrical misalignments. This allows the definition of tolerances for quality assurance and the design of quality assurance procedures.

  20. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.

  1. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. 16 observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited lifetime star field), vestibular-kinaesthetic (passive self motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s(2) peak acceleration). The angle of the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane observers were more likely to overestimate angle size while there was no such bias in the sagittal plane. Finally, observers responded slower when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating and overestimating the angle one has moved through in the horizontal and vertical planes respectively, suggests that the neural representation of self-motion through space is non-symmetrical which may relate to the fact that humans experience movement mostly within the horizontal plane.

  2. Monte Carlo treatment planning with modulated electron radiotherapy: framework development and application

    NASA Astrophysics Data System (ADS)

    Alexander, Andrew William

    Within the field of medical physics, Monte Carlo radiation transport simulations are considered to be the most accurate method for the determination of dose distributions in patients. The McGill Monte Carlo treatment planning system (MMCTP), provides a flexible software environment to integrate Monte Carlo simulations with current and new treatment modalities. A developing treatment modality called energy and intensity modulated electron radiotherapy (MERT) is a promising modality, which has the fundamental capabilities to enhance the dosimetry of superficial targets. An objective of this work is to advance the research and development of MERT with the end goal of clinical use. To this end, we present the MMCTP system with an integrated toolkit for MERT planning and delivery of MERT fields. Delivery is achieved using an automated "few leaf electron collimator" (FLEC) and a controller. Aside from the MERT planning toolkit, the MMCTP system required numerous add-ons to perform the complex task of large-scale autonomous Monte Carlo simulations. The first was a DICOM import filter, followed by the implementation of DOSXYZnrc as a dose calculation engine and by logic methods for submitting and updating the status of Monte Carlo simulations. Within this work we validated the MMCTP system with a head and neck Monte Carlo recalculation study performed by a medical dosimetrist. The impact of MMCTP lies in the fact that it allows for systematic and platform independent large-scale Monte Carlo dose calculations for different treatment sites and treatment modalities. In addition to the MERT planning tools, various optimization algorithms were created external to MMCTP. The algorithms produced MERT treatment plans based on dose volume constraints that employ Monte Carlo pre-generated patient-specific kernels. The Monte Carlo kernels are generated from patient-specific Monte Carlo dose distributions within MMCTP. The structure of the MERT planning toolkit software and optimization algorithms are demonstrated. We investigated the clinical significance of MERT on spinal irradiation, breast boost irradiation, and a head and neck sarcoma cancer site using several parameters to analyze the treatment plans. Finally, we investigated the idea of mixed beam photon and electron treatment planning. Photon optimization treatment planning tools were included within the MERT planning toolkit for the purpose of mixed beam optimization. In conclusion, this thesis work has resulted in the development of an advanced framework for photon and electron Monte Carlo treatment planning studies and the development of an inverse planning system for photon, electron or mixed beam radiotherapy (MBRT). The justification and validation of this work is found within the results of the planning studies, which have demonstrated dosimetric advantages to using MERT or MBRT in comparison to clinical treatment alternatives.

  3. Bayesian Framework for Water Quality Model Uncertainty Estimation and Risk Management

    EPA Science Inventory

    A formal Bayesian methodology is presented for integrated model calibration and risk-based water quality management using Bayesian Monte Carlo simulation and maximum likelihood estimation (BMCML). The primary focus is on lucid integration of model calibration with risk-based wat...

  4. A global solution to the Schrödinger equation: From Henstock to Feynman

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nathanson, Ekaterina S., E-mail: enathanson@ggc.edu; Jørgensen, Palle E. T., E-mail: palle-jorgensen@uiowa.edu

    2015-09-15

    One of the key elements of Feynman's formulation of non-relativistic quantum mechanics is a so-called Feynman path integral. It plays an important role in the theory, but it appears as a postulate based on intuition, rather than a well-defined object. All previous attempts to supply Feynman's theory with a rigorous mathematical underpinning, based on the physical requirements, have not been satisfactory. The difficulty comes from the need to define a measure on the infinite dimensional space of paths and to create an integral that would possess all of the properties requested by Feynman. In the present paper, we consider a new approach to defining the Feynman path integral, based on the theory developed by Muldowney [A Modern Theory of Random Variation: With Applications in Stochastic Calculus, Financial Mathematics, and Feynman Integration (John Wiley & Sons, Inc., New Jersey, 2012)]. Muldowney uses the Henstock integration technique and deals with non-absolute integrability of the Fresnel integrals, in order to obtain a representation of the Feynman path integral as a functional. This approach offers a mathematically rigorous definition supporting Feynman's intuitive derivations. But in his work, Muldowney gives only local-in-space-time solutions. A physical solution to the non-relativistic Schrödinger equation must be global, and it must be given in the form of a unitary one-parameter group in L²(ℝⁿ). The purpose of this paper is to show that a system of one-dimensional local Muldowney's solutions may be extended to yield a global solution. Moreover, the global extension can be represented by a unitary one-parameter group acting in L²(ℝⁿ).

  5. Processor Would Find Best Paths On Map

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P.

    1990-01-01

    Proposed very-large-scale integrated (VLSI) circuit image-data processor finds path of least cost from specified origin to any destination on map. Cost of traversal assigned to each picture element of map. Path of least cost from originating picture element to every other picture element computed as path that preserves as much as possible of signal transmitted by originating picture element. Dedicated microprocessor at each picture element stores cost of traversal and performs its share of computations of paths of least cost. Least-cost-path problem occurs in research, military maneuvers, and in planning routes of vehicles.
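
    The least-cost-path computation that the proposed processor performs in parallel can be written down serially with Dijkstra's algorithm on the cost map. The sketch below is that reference computation on a 4-connected grid, not a model of the VLSI circuit itself; the example cost map is invented.

```python
import heapq

def least_cost_paths(cost, origin):
    """Least-cost path value from `origin` to every cell of a 2D cost map,
    where the cost of a path is the sum of the traversal costs of the cells
    it enters (4-connected moves)."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    r0, c0 = origin
    dist[r0][c0] = cost[r0][c0]
    heap = [(dist[r0][c0], r0, c0)]
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r][c]:
            continue                            # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]           # cost of entering the neighbour
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

print(least_cost_paths([[1, 1, 5], [9, 1, 1], [1, 1, 1]], (0, 0)))
```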

  6. Computer calculation of Witten's 3-manifold invariant

    NASA Astrophysics Data System (ADS)

    Freed, Daniel S.; Gompf, Robert E.

    1991-10-01

    Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant.

  7. Path integral analysis of Jarzynski's equality: Analytical results

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Adib, Artur B.

    2009-02-01

    We apply path integrals to study nonequilibrium work theorems in the context of Brownian dynamics, deriving in particular the equations of motion governing the most typical and most dominant trajectories. For the analytically soluble cases of a moving harmonic potential and a harmonic oscillator with a time-dependent natural frequency, we find such trajectories, evaluate the work-weighted propagators, and validate Jarzynski’s equality.
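
    For completeness, the estimator that Jarzynski's equality licenses is easy to state in code: average exp(-W/kT) over nonequilibrium work samples and take -kT times the logarithm. The sketch below adds a shift to avoid underflow and checks the estimator on a Gaussian work distribution, for which dF = <W> - var(W)/(2kT) is known analytically; it is not the moving-harmonic-potential calculation of the paper.

```python
import numpy as np

def jarzynski_free_energy(work, kT=1.0):
    """Free-energy difference from nonequilibrium work samples via Jarzynski's
    equality, exp(-dF/kT) = <exp(-W/kT)>. A shifted (log-sum-exp) evaluation
    avoids underflow when the work values are large compared with kT."""
    w = np.asarray(work, float) / kT
    shift = w.min()                     # exp(-w) is largest at the smallest work
    log_mean = -shift + np.log(np.mean(np.exp(-(w - shift))))
    return -kT * log_mean

# check on a Gaussian work distribution: <W> = 5, var = 1, kT = 1 -> dF ~ 4.5
rng = np.random.default_rng(4)
work = rng.normal(5.0, 1.0, 200_000)
print(jarzynski_free_energy(work))
```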

  8. Book Review:

    NASA Astrophysics Data System (ADS)

    Louko, Jorma

    2007-04-01

    Bastianelli and van Nieuwenhuizen's monograph `Path Integrals and Anomalies in Curved Space' collects in one volume the results of the authors' 15-year research programme on anomalies that arise in Feynman diagrams of quantum field theories on curved manifolds. The programme was spurred by the path-integral techniques introduced in Alvarez-Gaumé and Witten's renowned 1983 paper on gravitational anomalies which, together with the anomaly cancellation paper by Green and Schwarz, led to the string theory explosion of the 1980s. The authors have produced a tour de force, giving a comprehensive and pedagogical exposition of material that is central to current research. The first part of the book develops from scratch a formalism for defining and evaluating quantum mechanical path integrals in nonlinear sigma models, using time slicing regularization, mode regularization and dimensional regularization. The second part applies this formalism to quantum fields of spin 0, 1/2, 1 and 3/2 and to self-dual antisymmetric tensor fields. The book concludes with a discussion of gravitational anomalies in 10-dimensional supergravities, for both classical and exceptional gauge groups. The target audience is researchers and graduate students in curved spacetime quantum field theory and string theory, and the aims, style and pedagogical level have been chosen with this audience in mind. Path integrals are treated as calculational tools, and the notation and terminology are throughout tailored to calculational convenience, rather than to mathematical rigour. The style is closer to that of an exceedingly thorough and self-contained review article than to that of a textbook. As the authors mention, the first part of the book can be used as an introduction to path integrals in quantum mechanics, although in a classroom setting perhaps more likely as supplementary reading than a primary class text. Readers outside the core audience, including this reviewer, will gain from the book a heightened appreciation of the central role of regularization as a defining ingredient of a quantum field theory and will be impressed by the agreement of results arising from different regularization schemes. The readers may in particular enjoy the authors' `brief history of anomalies' in quantum field theory, as well as a similar historical discussion of path integrals in quantum mechanics.

  9. A Systematic Approach for Computing Zero-Point Energy, Quantum Partition Function, and Tunneling Effect Based on Kleinert's Variational Perturbation Theory.

    PubMed

    Wong, Kin-Yiu; Gao, Jiali

    2008-09-09

    In this paper, we describe an automated integration-free path-integral (AIF-PI) method, based on Kleinert's variational perturbation (KP) theory, to treat internuclear quantum-statistical effects in molecular systems. We have developed an analytical method to obtain the centroid potential as a function of the variational parameter in the KP theory, which avoids numerical difficulties in path-integral Monte Carlo or molecular dynamics simulations, especially at the limit of zero-temperature. Consequently, the variational calculations using the KP theory can be efficiently carried out beyond the first order, i.e., the Giachetti-Tognetti-Feynman-Kleinert variational approach, for realistic chemical applications. By making use of the approximation of independent instantaneous normal modes (INM), the AIF-PI method can readily be applied to many-body systems. Previously, we have shown that in the INM approximation, the AIF-PI method is accurate for computing the quantum partition function of a water molecule (3 degrees of freedom) and the quantum correction factor for the collinear H(3) reaction rate (2 degrees of freedom). In this work, the accuracy and properties of the KP theory are further investigated by using the first three order perturbations on an asymmetric double-well potential, the bond vibrations of H(2), HF, and HCl represented by the Morse potential, and a proton-transfer barrier modeled by the Eckart potential. The zero-point energy, quantum partition function, and tunneling factor for these systems have been determined and are found to be in excellent agreement with the exact quantum results. Using our new analytical results at the zero-temperature limit, we show that the minimum value of the computed centroid potential in the KP theory is in excellent agreement with the ground state energy (zero-point energy) and the position of the centroid potential minimum is the expectation value of particle position in wave mechanics. The fast convergent property of the KP theory is further examined in comparison with results from the traditional Rayleigh-Ritz variational approach and Rayleigh-Schrödinger perturbation theory in wave mechanics. The present method can be used for thermodynamic and quantum dynamic calculations, including to systematically determine the exact value of zero-point energy and to study kinetic isotope effects for chemical reactions in solution and in enzymes.

  10. Comparative investigation of pure and mixed rare gas atoms on coronene molecules.

    PubMed

    Rodríguez-Cantano, Rocío; Bartolomei, Massimiliano; Hernández, Marta I; Campos-Martínez, José; González-Lezana, Tomás; Villarreal, Pablo; Pérez de Tudela, Ricardo; Pirani, Fernando; Hernández-Rojas, Javier; Bretón, José

    2017-01-21

    Clusters formed by the combination of rare gas (RG) atoms of He, Ne, Ar, and Kr on coronene have been investigated by means of a basin-hopping algorithm and path integral Monte Carlo calculations at T = 2 K. Energies and geometries have been obtained, and the role played by the specific RG-RG and RG-coronene interactions in the final results is analysed in detail. Signatures of diffuse behavior of the He atoms on the surface of the coronene contrast with the localization of the heavier species, Ar and Kr. The observed coexistence of various geometries for Ne suggests the motion of the RG atoms on the multi-well potential energy landscape offered by the coronene. The investigation of different clusters therefore enables a comparative analysis of localized versus non-localized features. Mixed Ar-He-coronene clusters have also been considered, and the competition of the RG atoms to occupy the docking sites on the molecule is discussed. All the obtained information is crucial for assessing the behavior of coronene, a prototypical polycyclic aromatic hydrocarbon, clustering with RG atoms at a temperature close to that of the interstellar medium, which arises from the critical balance of the interactions involved.
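
    As a rough illustration of the basin-hopping part of such a study, the sketch below uses SciPy's basinhopping routine to search for low-energy arrangements of a few adatoms interacting through Lennard-Jones pair terms plus a crude harmonic attraction toward a flat plane standing in for the coronene substrate. All potential parameters and the number of atoms are arbitrary placeholders, not the RG-RG or RG-coronene interactions used in the paper.

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    def energy(flat_xyz, eps=1.0, sigma=1.0, wall=2.0):
        """Toy energy: LJ adatom-adatom pairs plus a harmonic pull toward z = 0
        that stands in for the adatom-substrate attraction."""
        xyz = flat_xyz.reshape(-1, 3)
        e = 0.5 * wall * np.sum(xyz[:, 2] ** 2)
        for i in range(len(xyz)):
            for j in range(i + 1, len(xyz)):
                r = np.linalg.norm(xyz[i] - xyz[j])
                e += 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
        return e

    rng = np.random.default_rng(1)
    x0 = rng.uniform(-2.0, 2.0, size=3 * 4)          # four adatoms, random start
    result = basinhopping(energy, x0, niter=200)
    print("lowest energy found:", result.fun)
    ```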

  11. A new dynamical atmospheric ionizing radiation (AIR) model for epidemiological studies

    NASA Technical Reports Server (NTRS)

    De Angelis, G.; Clem, J. M.; Goldhagen, P. E.; Wilson, J. W.

    2003-01-01

    A new Atmospheric Ionizing Radiation (AIR) model is currently being developed for use in radiation dose evaluation in epidemiological studies targeted at atmospheric flight personnel such as civilian airline crewmembers. The model will allow computing values of biologically relevant parameters, e.g. dose equivalent and effective dose, for individual flights from 1945 onward. Each flight is described by its actual three-dimensional flight profile, i.e. geographic coordinates and altitudes varying with time. Solar-modulated primary particles are filtered with a new analytical, fully angle-dependent geomagnetic cutoff rigidity model, as a function of latitude, longitude, arrival direction, altitude and time. The particle transport results have been obtained with a technique based on the three-dimensional Monte Carlo transport code FLUKA, with a special procedure to deal with HZE particles. Particle fluxes are transformed into dose-related quantities and then integrated along the flight path to obtain the overall flight dose. Preliminary validations of the particle transport technique using data from the AIR Project ER-2 flight campaign of measurements are encouraging. Future efforts will deal with modeling of the effects of the aircraft structure as well as inclusion of solar particle events. Published by Elsevier Ltd on behalf of COSPAR.

  12. Role of temperature on static correlational properties in a spin-polarized electron gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, Priya; Moudgil, R. K., E-mail: rkmoudgil@kuk.ac.in; Kumar, Krishan

    We have studied the effect of temperature on the static correlational properties of a spin-polarized three-dimensional electron gas (3DEG) over a wide coupling and temperature regime. This problem has very recently been studied by Brown et al. using the restricted path-integral Monte Carlo (RPIMC) technique in the warm-dense regime. To this end, we have used the finite-temperature version of the dynamical mean-field theory of Singwi et al., the so-called quantum STLS (qSTLS) approach. The static density structure factor and the static pair-correlation function are calculated and compared with the RPIMC simulation data. We find excellent agreement with the simulation at high temperature over a wide coupling range. However, the agreement is seen to deteriorate somewhat with decreasing temperature. The pair-correlation function is found to become slightly negative at small electron separation. This may be attributed to the inadequacy of the mean-field theory in dealing with like-spin electron correlations in the strong-coupling domain. The good agreement with RPIMC data at high temperature seems to arise from the weakening of both the exchange and Coulomb correlations with rising temperature.

  13. The adsorption of helium atoms on coronene cations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzthaler, Thomas; Rasul, Bilal; Kuhn, Martin

    2016-08-14

    We report the first experimental study of the attachment of multiple foreign atoms to a cationic polycyclic aromatic hydrocarbon (PAH). The chosen PAH was coronene, C24H12, which was added to liquid helium nanodroplets and then subjected to electron bombardment. Using mass spectrometry, coronene cations decorated with helium atoms were clearly seen and the spectrum shows peaks with anomalously high intensities (“magic number” peaks), which represent ion-helium complexes with added stability. The data suggest the formation of a rigid helium layer consisting of 38 helium atoms that completely cover both faces of the coronene ion. Additional magic numbers can be seen for the further addition of 3 and 6 helium atoms, which are thought to attach to the edge of the coronene. The observation of magic numbers for the addition of 38 and 44 helium atoms is in good agreement with a recent path integral Monte Carlo prediction for helium atoms on neutral coronene. An understanding of how atoms and molecules attach to PAH ions is important for a number of reasons, including the potential role such complexes might play in the chemistry of the interstellar medium.

  14. Acetylcholine contributes to the integration of self-movement cues in head direction cells.

    PubMed

    Yoder, Ryan M; Chan, Jeremy H M; Taube, Jeffrey S

    2017-08-01

    Acetylcholine contributes to accurate performance on some navigational tasks, but details of its contribution to the underlying brain signals are not fully understood. The medial septal area provides widespread cholinergic input to various brain regions, but selective damage to medial septal cholinergic neurons generally has little effect on landmark-based navigation, or the underlying neural representations of location and directional heading in visual environments. In contrast, the loss of medial septal cholinergic neurons disrupts navigation based on path integration, but no studies have tested whether these path integration deficits are associated with disrupted head direction (HD) cell activity. Therefore, we evaluated HD cell responses to visual cue rotations in a familiar arena, and during navigation between familiar and novel arenas, after muscarinic receptor blockade with systemic atropine. Atropine treatment reduced the peak firing rate of HD cells, but failed to significantly affect other HD cell firing properties. Atropine also failed to significantly disrupt the dominant landmark control of the HD signal, even though we used a procedure that challenged this landmark control. In contrast, atropine disrupted HD cell stability during navigation between familiar and novel arenas, where path integration normally maintains a consistent HD cell signal across arenas. These results suggest that acetylcholine contributes to path integration, in part, by facilitating the use of idiothetic cues to maintain a consistent representation of directional heading. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Path Integrals for Electronic Densities, Reactivity Indices, and Localization Functions in Quantum Systems

    PubMed Central

    Putz, Mihai V.

    2009-01-01

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. The use of the path integral formalism for prescribing the electronic density presents several advantages: it assures an inner quantum mechanical description of the system through parameterized paths; averages the quantum fluctuations; behaves as the propagator for the time-space evolution of quantum information; resembles the Schrödinger equation; and allows a quantum statistical description of the system through computation of the partition function. In this framework, four levels of path integral formalism are presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density are rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for the statistical high- and low-temperature limits, the smearing justification for Bohr’s quantum stability postulate with the paradigmatic case of the hydrogen atom, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of the Becke-Edgecombe electron localization functions – all advocate for the reliability of the PI formalism of quantum mechanics as a versatile framework, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing (density-driven) many-electronic systems. PMID:20087467

  16. Path integrals for electronic densities, reactivity indices, and localization functions in quantum systems.

    PubMed

    Putz, Mihai V

    2009-11-10

    The density matrix theory, the ancestor of density functional theory, provides the immediate framework for Path Integral (PI) development, allowing the canonical density to be extended to many-electronic systems through the density functional closure relationship. The use of the path integral formalism for prescribing the electronic density presents several advantages: it assures an inner quantum mechanical description of the system through parameterized paths; averages the quantum fluctuations; behaves as the propagator for the time-space evolution of quantum information; resembles the Schrödinger equation; and allows a quantum statistical description of the system through computation of the partition function. In this framework, four levels of path integral formalism are presented: the Feynman quantum mechanical, the semiclassical, the Feynman-Kleinert effective classical, and the Fokker-Planck non-equilibrium ones. In each case the density matrix and/or the canonical density are rigorously defined and presented. The practical specializations for quantum free and harmonic motions, for the statistical high- and low-temperature limits, the smearing justification for Bohr's quantum stability postulate with the paradigmatic case of the hydrogen atom, along with the quantum chemical calculation of semiclassical electronegativity and hardness, of chemical action and Mulliken electronegativity, as well as the Markovian generalizations of the Becke-Edgecombe electron localization functions - all advocate for the reliability of the PI formalism of quantum mechanics as a versatile framework, suited for analytical and/or computational modeling of a variety of fundamental physical and chemical reactivity concepts characterizing (density-driven) many-electronic systems.

  17. Users guide to E859 phoswich analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costales, J.B.

    1992-11-30

    In this memo the authors describe the analysis path used to transform the phoswich data from raw data banks into cross sections suitable for publication. The primary purpose of this memo is not to document each analysis step in great detail but rather to point the reader to the fortran code used and to point out the essential features of the analysis path. A flow chart which summarizes the various steps performed to massage the data from beginning to end is given. In general, each step corresponds to a fortran program which was written to perform that particular task. The automation of the data analysis has been kept purposefully minimal in order to ensure the highest quality of the final product. However, tools have been developed which ease the non-automated steps. There are two major parallel routes for the data analysis: data reduction, and acceptance determination using detailed GEANT Monte Carlo simulations. In this memo, the authors first describe the data reduction up to the point where PHAD banks (Pass 1-like banks) are created. They then describe the steps taken in the GEANT Monte Carlo route. Note that a detailed memo describing the methodology of the acceptance corrections has already been written; therefore the discussion of the acceptance determination is kept to a minimum and the reader is referred to that memo for further details. Finally, they describe the cross section formation process and how final spectra are extracted.

  18. Probabilistic structural analysis using a general purpose finite element program

    NASA Astrophysics Data System (ADS)

    Riha, D. S.; Millwater, H. R.; Thacker, B. H.

    1992-07-01

    This paper presents an accurate and efficient method to predict the probabilistic response for structural response quantities, such as stress, displacement, natural frequencies, and buckling loads, by combining the capabilities of MSC/NASTRAN, including design sensitivity analysis and fast probability integration. Two probabilistic structural analysis examples have been performed and verified by comparison with Monte Carlo simulation of the analytical solution. The first example consists of a cantilevered plate with several point loads. The second example is a probabilistic buckling analysis of a simply supported composite plate under in-plane loading. The coupling of MSC/NASTRAN and fast probability integration is shown to be orders of magnitude more efficient than Monte Carlo simulation with excellent accuracy.

  19. Monte Carlo Simulations of Background Spectra in Integral Imager Detectors

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.; Dietz, K. L.; Ramsey, B. D.; Weisskopf, M. C.

    1998-01-01

    Predictions of the expected gamma-ray backgrounds in the ISGRI (CdTe) and PiCsIT (Csl) detectors on INTEGRAL due to cosmic-ray interactions and the diffuse gamma-ray background have been made using a coupled set of Monte Carlo radiation transport codes (HETC, FLUKA, EGS4, and MORSE) and a detailed, 3-D mass model of the spacecraft and detector assemblies. The simulations include both the prompt background component from induced hadronic and electromagnetic cascades and the delayed component due to emissions from induced radioactivity. Background spectra have been obtained with and without the use of active (BGO) shielding and charged particle rejection to evaluate the effectiveness of anticoincidence counting on background rejection.

  20. The impact of self-driving cars on existing transportation networks

    NASA Astrophysics Data System (ADS)

    Ji, Xiang

    2018-04-01

    In this paper, considering the use of self-driving cars, I study the congestion problems of traffic networks at both the macroscopic and microscopic levels. First, a macroscopic mathematical model is established using the Greenshields function, the analytic hierarchy process and Monte Carlo simulation, in which the congestion level is divided into five levels according to the average vehicle speed. Roads with evident congestion are investigated first, and their traffic flow and topology are analyzed. By processing the data, I propose a traffic congestion model. In the model, I assume that half of the non-self-driving cars take only the shortest route while the other half choose their path randomly. Self-driving cars, in contrast, can obtain vehicle density data for each road and choose their path more rationally; when the traffic density of a path exceeds a specified value, that path cannot be selected. To overcome the dimensional differences in the data, I rate the paths by Borda sorting. A Monte Carlo cellular automaton simulation is used to obtain negative feedback information on the density of the traffic network, with vehicles added to the road network one by one. I then analyze the influence of this negative feedback information on the path selection of intelligent cars. The conclusion is that increasing the proportion of intelligent vehicles makes the road load more balanced, and self-driving cars can avoid the peak and reduce the degree of road congestion. Combined with other models, the optimal self-driving ratio is about sixty-two percent. At the microscopic level, using the single-lane NS (Nagel-Schreckenberg) traffic rule, another model is established to analyze the road partition scheme. Self-driving traffic is more intelligent, and its cooperation can reduce the random deceleration probability. From this model, I obtain space-time distributions for different self-driving ratios. I also simulate the case of a dedicated lane for self-driving cars and compare it with the former model. It is concluded that a dedicated lane is more efficient only in a certain interval, so offering a separate lane is not recommended. Self-driving also faces the problems of hacker attacks and greater damage after a fault, so when the self-driving ratio is higher than a certain value, the increase in traffic flow rate is small. That value is discussed in this article, and the optimal proportion is determined. Finally, I give a nontechnical explanation of the problem.
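
    The microscopic part of the abstract rests on a single-lane NS-type cellular automaton. The sketch below is a generic Nagel-Schreckenberg implementation in which self-driving cars are simply assigned a smaller random-deceleration probability; the 62% share and both probabilities are placeholder values, not the calibrated parameters of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nasch_step(pos, vel, p_slow, road_len, v_max=5):
        """One Nagel-Schreckenberg update: accelerate, respect headway, random slow-down, move."""
        order = np.argsort(pos)
        pos, vel, p_slow = pos[order], vel[order], p_slow[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_len     # empty cells to the car ahead
        vel = np.minimum(vel + 1, v_max)                   # acceleration
        vel = np.minimum(vel, gaps)                        # no collisions
        slow = rng.random(len(vel)) < p_slow               # random deceleration
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)
        return (pos + vel) % road_len, vel, p_slow

    # placeholder parameters: 62% self-driving share, lower slow-down probability for them
    road_len, n_cars, share_av = 200, 60, 0.62
    pos = np.sort(rng.choice(road_len, size=n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    p_slow = np.where(rng.random(n_cars) < share_av, 0.05, 0.30)

    for _ in range(1000):
        pos, vel, p_slow = nasch_step(pos, vel, p_slow, road_len)
    print("mean speed (cells per step):", vel.mean())
    ```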

  1. Ab initio molecular dynamics with nuclear quantum effects at classical cost: Ring polymer contraction for density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsalek, Ondrej; Markland, Thomas E., E-mail: tmarkland@stanford.edu

    Path integral molecular dynamics simulations, combined with an ab initio evaluation of interactions using electronic structure theory, incorporate the quantum mechanical nature of both the electrons and nuclei, which are essential to accurately describe systems containing light nuclei. However, path integral simulations have traditionally required a computational cost around two orders of magnitude greater than treating the nuclei classically, making them prohibitively costly for most applications. Here we show that the cost of path integral simulations can be dramatically reduced by extending our ring polymer contraction approach to ab initio molecular dynamics simulations. By using density functional tight binding as a reference system, we show that our ring polymer contraction scheme gives rapid and systematic convergence to the full path integral density functional theory result. We demonstrate the efficiency of this approach in ab initio simulations of liquid water and the reactive protonated and deprotonated water dimer systems. We find that the vast majority of the nuclear quantum effects are accurately captured using contraction to just the ring polymer centroid, which requires the same number of density functional theory calculations as a classical simulation. Combined with a multiple time step scheme using the same reference system, which allows the time step to be increased, this approach is as fast as a typical classical ab initio molecular dynamics simulation and 35× faster than a full path integral calculation, while still exactly including the quantum sampling of nuclei. This development thus offers a route to routinely include nuclear quantum effects in ab initio molecular dynamics simulations at negligible computational cost.
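
    The contraction step itself can be pictured as a truncation of the ring polymer's Fourier (normal-mode) expansion along the imaginary-time index. The routine below is a generic sketch of that idea, mapping P beads onto a smaller number of contracted beads (down to the centroid for n_contracted = 1); it assumes nothing about the DFTB reference system or the multiple-time-stepping machinery described in the abstract.

    ```python
    import numpy as np

    def contract_ring_polymer(q, n_contracted):
        """Contract a ring polymer from P beads to n_contracted beads by keeping
        only the lowest Fourier modes along the bead index (Nyquist-mode subtleties
        ignored for brevity). q: bead positions, shape (P, n_atoms, 3)."""
        P = q.shape[0]
        modes = np.fft.rfft(q, axis=0) / P                 # normal modes of the bead necklace
        n_keep = n_contracted // 2 + 1                     # modes representable on the smaller necklace
        return np.fft.irfft(modes[:n_keep] * n_contracted, n=n_contracted, axis=0)

    # toy usage: 32 beads for 3 atoms, contracted to the centroid (1 bead)
    rng = np.random.default_rng(0)
    beads = rng.normal(size=(32, 3, 3))
    centroid = contract_ring_polymer(beads, 1)
    assert np.allclose(centroid[0], beads.mean(axis=0))    # the centroid is preserved exactly
    ```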

  2. System and method for interfacing large-area electronics with integrated circuit devices

    DOEpatents

    Verma, Naveen; Glisic, Branko; Sturm, James; Wagner, Sigurd

    2016-07-12

    A system and method for interfacing large-area electronics with integrated circuit devices is provided. The system may be implemented in an electronic device including a large-area electronics (LAE) device disposed on a substrate. An integrated circuit (IC) is disposed on the substrate. A non-contact interface is disposed on the substrate and coupled between the LAE device and the IC. The non-contact interface is configured to provide at least one of a data acquisition path or a control path between the LAE device and the IC.

  3. Path-integral representation for the relativistic particle propagators and BFV quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fradkin, E.S.; Gitman, D.M.

    1991-11-15

    The path-integral representations for the propagators of scalar and spinor fields in an external electromagnetic field are derived. The Hamiltonian form of such expressions can be interpreted in the sense of Batalin-Fradkin-Vilkovisky quantization of one-particle theory. The Lagrangian representation as derived allows one to extract in a natural way the expressions for the corresponding gauge-invariant (reparametrization- and supergauge-invariant) actions for pointlike scalar and spinning particles. At the same time, the measure and ranges of integrations, admissible gauge conditions, and boundary conditions can be exactly established.

  4. i-PI: A Python interface for ab initio path integral molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Ceriotti, Michele; More, Joshua; Manolopoulos, David E.

    2014-03-01

    Recent developments in path integral methodology have significantly reduced the computational expense of including quantum mechanical effects in the nuclear motion in ab initio molecular dynamics simulations. However, the implementation of these developments requires a considerable programming effort, which has hindered their adoption. Here we describe i-PI, an interface written in Python that has been designed to minimise the effort required to bring state-of-the-art path integral techniques to an electronic structure program. While it is best suited to first principles calculations and path integral molecular dynamics, i-PI can also be used to perform classical molecular dynamics simulations, and can just as easily be interfaced with an empirical forcefield code. To give just one example of the many potential applications of the interface, we use it in conjunction with the CP2K electronic structure package to showcase the importance of nuclear quantum effects in high-pressure water. Catalogue identifier: AERN_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERN_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 138626 No. of bytes in distributed program, including test data, etc.: 3128618 Distribution format: tar.gz Programming language: Python. Computer: Multiple architectures. Operating system: Linux, Mac OSX, Windows. RAM: Less than 256 Mb Classification: 7.7. External routines: NumPy Nature of problem: Bringing the latest developments in the modelling of nuclear quantum effects with path integral molecular dynamics to ab initio electronic structure programs with minimal implementational effort. Solution method: State-of-the-art path integral molecular dynamics techniques are implemented in a Python interface. Any electronic structure code can be patched to receive the atomic coordinates from the Python interface, and to return the forces and energy that are used to integrate the equations of motion. Restrictions: This code only deals with distinguishable particles. It does not include fermionic or bosonic exchanges between equivalent nuclei, which can become important at very low temperatures. Running time: Depends dramatically on the nature of the simulation being performed. A few minutes for short tests with empirical force fields, up to several weeks for production calculations with ab initio forces. The examples provided with the code run in less than an hour.

  5. Path-integral theory of an axially confined worm-like chain

    NASA Astrophysics Data System (ADS)

    Smith, D. A.

    2001-06-01

    A path-integral formulation is developed for the thermodynamic properties of a worm-like chain moving on a surface and laterally confined by a harmonic potential. The free energy of the chain is calculated as a function of its length and boundary conditions at each end. Distribution functions for chain displacements can be constructed by utilizing the Markov property as a function of displacement φ(s) and its derivative dφ(s)/ds along the path. These quantities are also calculated in the presence of pinning sites which impose fixed positive or negative displacements, foreshadowing their application to a model for the regulation of striated muscle.

  6. Critique of Coleman's Theory of the Vanishing Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Susskind, Leonard

    In these lectures I would like to review some of the criticisms of the Coleman wormhole theory of the vanishing cosmological constant. In particular, I would like to focus on the most fundamental assumption: that the path integral over topologies defines a probability distribution for the cosmological constant of the form EXP(A), with A being the Baum-Hawking-Coleman saddle point. Coleman argues that the Euclidean path integral over all geometries may be dominated by special configurations which consist of large smooth "spheres" connected by any number of narrow wormholes. Formally summing up such configurations gives a very divergent expression for the path integral…

  7. Triple-Pulsed Two-Micron Integrated Path Differential Absorption Lidar: A New Active Remote Sensing Capability with Path to Space

    NASA Technical Reports Server (NTRS)

    Singh, Upendra N.; Refaat, Tamer F.; Petros, Mulugeta; Yu, Jirong

    2015-01-01

    The two-micron wavelength is suitable for monitoring atmospheric water vapor and carbon dioxide, the two most dominant greenhouse gases. Recent advances in 2-micron laser technology have paved the way for constructing state-of-the-art lidar transmitters for active remote sensing applications. In this paper, a new triple-pulsed 2-micron integrated path differential absorption lidar is presented. This lidar is capable of measuring either two species, or a single species with two different weighting functions, simultaneously and independently. Development of this instrument is being conducted at NASA Langley Research Center. Instrument scaling for projected future space missions will be discussed.

  8. EuPathDB: the eukaryotic pathogen genomics database resource

    PubMed Central

    Aurrecoechea, Cristina; Barreto, Ana; Basenko, Evelina Y.; Brestelli, John; Brunk, Brian P.; Cade, Shon; Crouch, Kathryn; Doherty, Ryan; Falke, Dave; Fischer, Steve; Gajria, Bindu; Harb, Omar S.; Heiges, Mark; Hertz-Fowler, Christiane; Hu, Sufen; Iodice, John; Kissinger, Jessica C.; Lawrence, Cris; Li, Wei; Pinney, Deborah F.; Pulman, Jane A.; Roos, David S.; Shanmugasundram, Achchuthan; Silva-Franco, Fatima; Steinbiss, Sascha; Stoeckert, Christian J.; Spruill, Drew; Wang, Haiming; Warrenfeltz, Susanne; Zheng, Jie

    2017-01-01

    The Eukaryotic Pathogen Genomics Database Resource (EuPathDB, http://eupathdb.org) is a collection of databases covering 170+ eukaryotic pathogens (protists & fungi), along with relevant free-living and non-pathogenic species, and select pathogen hosts. To facilitate the discovery of meaningful biological relationships, the databases couple preconfigured searches with visualization and analysis tools for comprehensive data mining via intuitive graphical interfaces and APIs. All data are analyzed with the same workflows, including creation of gene orthology profiles, so data are easily compared across data sets, data types and organisms. EuPathDB is updated with numerous new analysis tools, features, data sets and data types. New tools include GO, metabolic pathway and word enrichment analyses plus an online workspace for analysis of personal, non-public, large-scale data. Expanded data content is mostly genomic and functional genomic data while new data types include protein microarray, metabolic pathways, compounds, quantitative proteomics, copy number variation, and polysomal transcriptomics. New features include consistent categorization of searches, data sets and genome browser tracks; redesigned gene pages; effective integration of alternative transcripts; and a EuPathDB Galaxy instance for private analyses of a user's data. Forthcoming upgrades include user workspaces for private integration of data with existing EuPathDB data and improved integration and presentation of host–pathogen interactions. PMID:27903906

  9. Short-Path Statistics and the Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Blanco, Stéphane; Fournier, Richard

    2006-12-01

    In the field of first return time statistics in bounded domains, short paths may be defined as those paths for which the diffusion approximation is inappropriate. This is at the origin of numerous open questions concerning the characterization of residence time distributions. We show here how general integral constraints can be derived that make it possible to address short-path statistics indirectly by application of the diffusion approximation to long paths. Application to the moments of the distribution at the low-Knudsen limit leads to simple practical results and novel physical pictures.

  10. The Human Space Life Sciences Critical Path Roadmap Project: A Strategy for Human Space Flight through Exploration-Class Missions

    NASA Technical Reports Server (NTRS)

    Sawin, Charles F.

    1999-01-01

    The product of the critical path roadmap project is an integrated strategy for mitigating the risks associated with human exploration-class missions. It is an evolving process that will assure the ability to communicate the integrated critical path roadmap. Unlike previous reports, this one will not sit on a shelf - it has the full support of the JSC Space and Life Sciences Directorate (SA) and is already being used as a decision-making tool (e.g., budget and investigation planning for Shuttle and Space Station missions). The utility of this product depends on many efforts, namely providing the required information (completed risk data sheets, critical question information, technology data). It is essential to communicate the results of the critical path roadmap to the scientific community - this meeting is a good opportunity to do so. The web site envisioned for the critical path roadmap will provide the capability to communicate to a broader community and to track and update the system routinely.

  11. Development of a new integrated local trajectory planning and tracking control framework for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Sun, Zhenping; Cao, Dongpu; Liu, Daxue; He, Hangen

    2017-03-01

    This study proposes a novel integrated local trajectory planning and tracking control (ILTPTC) framework for autonomous vehicles driving along a reference path with obstacles avoidance. For this ILTPTC framework, an efficient state-space sampling-based trajectory planning scheme is employed to smoothly follow the reference path. A model-based predictive path generation algorithm is applied to produce a set of smooth and kinematically-feasible paths connecting the initial state with the sampling terminal states. A velocity control law is then designed to assign a speed value at each of the points along the generated paths. An objective function considering both safety and comfort performance is carefully formulated for assessing the generated trajectories and selecting the optimal one. For accurately tracking the optimal trajectory while overcoming external disturbances and model uncertainties, a combined feedforward and feedback controller is developed. Both simulation analyses and vehicle testing are performed to verify the effectiveness of the proposed ILTPTC framework, and future research is also briefly discussed.

  12. From conformal blocks to path integrals in the Vaidya geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.

  13. Integration of polarization-multiplexing and phase-shifting in nanometric two dimensional self-mixing measurement.

    PubMed

    Tao, Yufeng; Xia, Wei; Wang, Ming; Guo, Dongmei; Hao, Hui

    2017-02-06

    The integration of phase manipulation and polarization multiplexing was introduced to self-mixing interferometry (SMI) for high-sensitivity measurement. Light polarizations were used to increase the number of measuring paths, offering manifold merits for potential applications. The laser source was studied as a microwave-photonic resonator optically injected by doubly reflected light, using a two-feedback-factor analytical model. Independent external paths exploited magnesium-oxide-doped lithium niobate crystals at perpendicular polarizations to transfer interferometric phases into the amplitudes of harmonics. Theoretical resolutions reached the angstrom level. By integrating the two techniques, this SMI outperformed conventional single-path SMIs, allowing simultaneous dual-target measurement with a single laser tube at high sensitivity and low speckle noise. In an experimental demonstration, using a nonlinear filtering method, a custom-made phase-resolved algorithm computed instantaneous two-dimensional displacements in real time with nanometer resolution. Experimental comparisons with the lock-in technique and a commercial Polytec-5000 laser Doppler velocimeter validated this two-path SMI in the micron range without optical cross-talk. Moreover, the accuracy, which depends on the slewing rates of the crystals, can be flexibly adjusted.

  14. From conformal blocks to path integrals in the Vaidya geometry

    DOE PAGES

    Anous, Tarek; Hartman, Thomas; Rovai, Antonin; ...

    2017-09-04

    Correlators in conformal field theory are naturally organized as a sum over conformal blocks. In holographic theories, this sum must reorganize into a path integral over bulk fields and geometries. We explore how these two sums are related in the case of a point particle moving in the background of a 3d collapsing black hole. The conformal block expansion is recast as a sum over paths of the first-quantized particle moving in the bulk geometry. Off-shell worldlines of the particle correspond to subdominant contributions in the Euclidean conformal block expansion, but these same operators must be included in order to correctly reproduce complex saddles in the Lorentzian theory. During thermalization, a complex saddle dominates under certain circumstances; in this case, the CFT correlator is not given by the Virasoro identity block in any channel, but can be recovered by summing heavy operators. This effectively converts the conformal block expansion in CFT from a sum over intermediate states to a sum over channels that mimics the bulk path integral.

  15. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

    Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best-path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. The performance of this optical mesh network is then analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the validity of the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when the multi-hop parallel network is used in FSO links. Furthermore, using a larger number of hops and cooperative paths can improve the quality of the communication.

  16. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    PubMed

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima, CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approaches. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  17. Integration of Hierarchical Goal Network Planning and Autonomous Path Planning

    DTIC Science & Technology

    2016-03-01

    Automated planning has...world robotic systems. This report documents work to integrate a hierarchical goal network planning algorithm with low-level path planning. The system

  18. Batalin-Vilkovisky quantization and generalizations

    NASA Astrophysics Data System (ADS)

    Bering, Klaus

    Gauge theories play an important role in modern physics. Whenever a gauge symmetry is present, one should provide a manifestly gauge-independent formalism. It turns out that the BRST symmetry plays a prominent part in providing this gauge independence. The importance of gauge independence in the Hamiltonian Batalin-Fradkin-Fradkina-Vilkovisky formalism and in the Lagrangian Batalin-Vilkovisky formalism is stressed. Parallels are drawn between the various theories. A Hamiltonian path integral that takes into account quantum ordering effects arising in the operator formalism should be written with the help of the star-multiplication or the Moyal bracket. It is generally believed that this leads to higher-order quantum corrections in the corresponding Lagrangian path integral. A higher-order Lagrangian path integral based on a nilpotent higher-order odd Laplacian is proposed. A new gauge-independence mechanism that adapts to the higher-order formalism, and that by-passes the problem of constructing a BRST transformation of the path integral in the higher-order case, is developed. The new gauge mechanism is closely related to the cohomology of the odd Laplacian operator. Various cohomology aspects of the odd Laplacian are investigated. Whereas, for instance, the role of the ghost-cohomology properties of the BFV-BRST charge has been emphasized by several authors, the cohomology of the odd Laplacian is in general not well known.

  19. Integrating cell on chip—Novel waveguide platform employing ultra-long optical paths

    NASA Astrophysics Data System (ADS)

    Fohrmann, Lena Simone; Sommer, Gerrit; Pitruzzello, Giampaolo; Krauss, Thomas F.; Petrov, Alexander Yu.; Eich, Manfred

    2017-09-01

    Optical waveguides are the most fundamental building blocks of integrated optical circuits. They are extremely well understood, yet there is still room for surprises. Here, we introduce a novel 2D waveguide platform which affords a strong interaction of the evanescent tail of a guided optical wave with an external medium while occupying only a very small geometrical footprint. The key feature of the platform is its ability to integrate ultra-long path lengths by combining low propagation losses in a silicon slab with multiple reflections of the guided wave from photonic crystal (PhC) mirrors. With a reflectivity of 99.1% for our tailored PhC mirrors, we achieve interaction paths of 25 cm within an area of less than 10 mm2. This corresponds to an effective propagation loss of 0.17 dB/cm, which is much lower than the state-of-the-art loss of approximately 1 dB/cm for single-mode silicon channel waveguides. In contrast to conventional waveguides, our 2D approach leads to a decay of the guided-wave power only inversely proportional to the optical path length. This entirely different characteristic is the major advantage of the 2D integrating cell waveguide platform over conventional channel waveguide concepts that obey the Beer-Lambert law.

  20. Optical Imaging and Radiometric Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digitally sampled versions of functions ranging from Zernike polynomials to combinations of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge diffusion modulation transfer function (MTF).

  1. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering a balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering a ~30% lower computational cost than direct Monte Carlo integration in our tests.
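
    As a concrete picture of the quadrature-grid route that thermal lines approximates stochastically, the sketch below evaluates a vibrational expectation value of a toy observable over the product of two harmonic-mode ground states, once on a tensor-product Gauss-Hermite grid and once by direct Monte Carlo sampling of the same Gaussian density. The observable, frequencies and grid sizes are arbitrary stand-ins, not the electronic observables of the paper (hbar = 1, mass-weighted coordinates assumed).

    ```python
    import numpy as np

    omegas = np.array([1.0, 2.5])            # toy mode frequencies

    def observable(q):
        """Toy 'electronic' observable evaluated at the nuclear configuration q."""
        return np.cos(q[..., 0]) * np.exp(-0.1 * q[..., 1] ** 2)

    # Tensor-product Gauss-Hermite quadrature over the harmonic ground-state density
    nodes, weights = np.polynomial.hermite.hermgauss(12)          # weight exp(-t^2)
    Q0, Q1 = np.meshgrid(nodes / np.sqrt(omegas[0]),
                         nodes / np.sqrt(omegas[1]), indexing="ij")
    W = np.outer(weights, weights) / np.pi
    quad_value = np.sum(W * observable(np.stack([Q0, Q1], axis=-1)))

    # Direct Monte Carlo sampling of the same Gaussian density
    rng = np.random.default_rng(0)
    samples = rng.normal(scale=1.0 / np.sqrt(2.0 * omegas), size=(200_000, 2))
    mc_value = observable(samples).mean()

    print("quadrature:", quad_value, "  Monte Carlo:", mc_value)
    ```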

  2. IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps

    DTIC Science & Technology

    2016-11-01

    ...career paths for program managers, plans to strengthen program management, and use of special hiring authorities... agencies with direct hiring authority for program managers and directed OPM to create a specialized career path. OMB also tasked agencies with...guidance for developing career paths for IT program managers. OPM's career path guide was to build upon its IT Program Management Competency Model

  3. Adaptive time-stepping Monte Carlo integration of Coulomb collisions

    NASA Astrophysics Data System (ADS)

    Särkimäki, K.; Hirvijoki, E.; Terävä, J.

    2018-01-01

    We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included in existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, the Dreicer generation of runaway electrons, and the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
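
    The core numerical ingredient, adaptive step-size control for a stochastic differential equation, can be sketched with step doubling: take one Euler-Maruyama step and two half steps built from a consistently split Wiener increment, then shrink or grow the time step according to their discrepancy. The drift, diffusion and tolerances below are placeholders for a 1D test SDE, not the relativistic Beliaev-Budker operator of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def drift(x):        # placeholder slowing-down drift
        return -0.5 * x

    def diffusion(x):    # placeholder diffusion coefficient
        return 0.3

    def em_step(x, dt, dW):
        return x + drift(x) * dt + diffusion(x) * dW

    def adaptive_em(x, t_end, dt=1e-2, tol=1e-3):
        """Euler-Maruyama with step-doubling error control."""
        t = 0.0
        while t < t_end:
            dt = min(dt, t_end - t)
            dW1 = rng.normal(0.0, np.sqrt(dt / 2.0))
            dW2 = rng.normal(0.0, np.sqrt(dt / 2.0))
            coarse = em_step(x, dt, dW1 + dW2)                        # one full step
            fine = em_step(em_step(x, dt / 2.0, dW1), dt / 2.0, dW2)  # two half steps
            if abs(coarse - fine) > tol:
                dt *= 0.5                                             # reject and retry
                continue
            x, t = fine, t + dt
            if abs(coarse - fine) < 0.1 * tol:
                dt *= 2.0                                             # accept and grow the step
        return x

    print("x(t_end) =", adaptive_em(1.0, t_end=5.0))
    ```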

  4. Velocity Inversion In Cylindrical Couette Gas Flows

    NASA Astrophysics Data System (ADS)

    Dongari, Nishanth; Barber, Robert W.; Emerson, David R.; Zhang, Yonghao; Reese, Jason M.

    2012-05-01

    We investigate a power-law probability distribution function to describe the mean free path of rarefied gas molecules in non-planar geometries. A new curvature-dependent model is derived by taking into account the boundary-limiting effects on the molecular mean free path for surfaces with both convex and concave curvatures. In comparison to a planar wall, we find that the mean free path for a convex surface is higher at the wall and exhibits a sharper gradient within the Knudsen layer. In contrast, a concave wall exhibits a lower mean free path near the surface and the gradients in the Knudsen layer are shallower. The Navier-Stokes constitutive relations and velocity-slip boundary conditions are modified based on a power-law scaling to describe the mean free path, in accordance with the kinetic theory of gases, i.e. transport properties can be described in terms of the mean free path. Velocity profiles for isothermal cylindrical Couette flow are obtained using the power-law model. We demonstrate that our model is more accurate than the classical slip solution, especially in the transition regime, and we are able to capture important non-linear trends associated with the non-equilibrium physics of the Knudsen layer. In addition, we establish a new criterion for the critical accommodation coefficient that leads to the non-intuitive phenomenon of velocity inversion. Our results are compared with conventional hydrodynamic models and direct simulation Monte Carlo data. The power-law model predicts that the critical accommodation coefficient is significantly lower than that calculated using the classical slip solution and is in good agreement with available DSMC data. Our proposed constitutive scaling for non-planar surfaces is based on simple physical arguments and can be readily implemented in conventional fluid dynamics codes for arbitrary geometric configurations.

  5. Montelukast photodegradation: elucidation of Ф-order kinetics, determination of quantum yields and application to actinometry.

    PubMed

    Maafi, Mounir; Maafi, Wassila

    2014-08-25

    A recently developed Ф-order semi-empirical integrated rate-law for photoreversible AB(2Ф) reactions has been successfully applied to investigate the photodegradation kinetics of Montelukast sodium (Monte) in ethanol. The model equations also served to propose a new stepwise kinetic elucidation method valid for any AB(2Ф) system, and its application to the determination of Monte's forward (Ф(λ(irr))(A-->B)) and reverse (Ф(λ(irr))(B-->A)) quantum yields at various irradiation wavelengths. It was found that Ф(λ(irr))(A-->B) undergoes a 15-fold increase with wavelength between 220 and 360 nm, with the spectral section 250-360 nm representing the causative range for effective photodegradation of Monte. The reverse quantum yield values were generally between 12 and 54% lower than those recorded for Ф(λ(irr))(A-->B), with the trans-isomer (Monte) converting almost completely to its cis-counterpart at high irradiation wavelengths. Furthermore, the potential use of Monte as an actinometer has been investigated, and an actinometric method was proposed. This study demonstrated the usefulness of Monte for monochromatic light actinometry over the dynamic range 258-380 nm. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Regression-based model of skin diffuse reflectance for skin color analysis

    NASA Astrophysics Data System (ADS)

    Tsumura, Norimichi; Kawazoe, Daisuke; Nakaguchi, Toshiya; Ojima, Nobutoshi; Miyake, Yoichi

    2008-11-01

    A simple regression-based model of skin diffuse reflectance is developed based on reflectance samples calculated by Monte Carlo simulation of light transport in a two-layered skin model. This reflectance model includes the values of spectral reflectance in the visible spectrum for Japanese women. The modified Lambert-Beer law holds in the proposed model with a modified mean free path length in non-linear density space. The average RMS and maximum errors of the proposed model were 1.1 and 3.1%, respectively, over the above range.

  7. Mapping the q-voter model: From a single chain to complex networks

    NASA Astrophysics Data System (ADS)

    Jȩdrzejewski, Arkadiusz; Sznajd-Weron, Katarzyna; Szwabiński, Janusz

    2016-03-01

    We propose and compare six different ways of mapping the modified q-voter model to complex networks. Considering square lattices, Barabási-Albert, Watts-Strogatz and real Twitter networks, we ask whether a particular choice of the group of influence of fixed size q always leads to different behavior at the macroscopic level. Using Monte Carlo simulations we show that the answer depends on the relative average path length of the network, and that for real-life topologies the differences between the considered mappings may be negligible.
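
    A minimal Monte Carlo kernel for one such mapping is sketched below: a node adopts the opinion of a randomly drawn q-panel of its neighbours only when the panel is unanimous, with panel members drawn with repetition. The graph, q and the update variant are illustrative choices; the six mappings compared in the paper differ precisely in how this panel is constructed.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(3)

    def qvoter_sweep(graph, opinions, q=4):
        """One Monte Carlo sweep of a basic q-voter update (panel drawn with repetition)."""
        n = graph.number_of_nodes()
        for _ in range(n):
            i = rng.integers(n)
            neigh = list(graph.neighbors(i))
            if not neigh:
                continue
            panel = rng.choice(neigh, size=q, replace=True)
            if np.all(opinions[panel] == opinions[panel[0]]):   # unanimous panel -> conform
                opinions[i] = opinions[panel[0]]
        return opinions

    G = nx.watts_strogatz_graph(n=500, k=6, p=0.1, seed=3)
    spins = rng.choice(np.array([-1, 1]), size=G.number_of_nodes())
    for _ in range(200):
        spins = qvoter_sweep(G, spins)
    print("final magnetisation:", spins.mean())
    ```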

  8. Importance biasing scheme implemented in the PRIZMA code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-12-31

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources, and material composition, and to obtain parameters specified by the user. It can also calculate the paths of particle cascades (including neutrons, photons, electrons, positrons and heavy charged particles), taking possible transmutations into account. An importance biasing scheme was implemented to solve problems which require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, problems of detection, etc.). The scheme enables the trajectory-building algorithm to be adapted to the peculiarities of a problem.
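
    Importance biasing of this kind is commonly realised with geometric splitting and Russian roulette: a particle crossing into a more important region is split into several lower-weight copies, while one entering a less important region is killed with a compensating weight increase for the survivors. The sketch below is a generic, weight-conserving version of that step, not the PRIZMA implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def adjust_population(particles, importance_old, importance_new):
        """Splitting / Russian-roulette step for particles crossing between regions.
        particles: list of (weight, state) tuples; the expected total weight is conserved."""
        ratio = importance_new / importance_old
        out = []
        for weight, state in particles:
            if ratio >= 1.0:                           # splitting
                n = int(ratio) + (rng.random() < ratio - int(ratio))
                out.extend([(weight / ratio, state)] * n)
            elif rng.random() < ratio:                 # Russian roulette survivor
                out.append((weight / ratio, state))
        return out

    # toy usage: ten unit-weight particles entering a region twice as important
    survivors = adjust_population([(1.0, None)] * 10, importance_old=1.0, importance_new=2.0)
    print(len(survivors), "particles, total weight", sum(w for w, _ in survivors))
    ```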

  9. Cyclotron line resonant transfer through neutron star atmospheres

    NASA Technical Reports Server (NTRS)

    Wang, John C. L.; Wasserman, Ira M.; Salpeter, Edwin E.

    1988-01-01

    Monte Carlo methods are used to study in detail the resonant radiative transfer of cyclotron line photons with recoil through a purely scattering neutron star atmosphere for both the polarized and unpolarized cases. For each case, the number of scatters, the path length traveled, the escape frequency shift, the escape direction cosine, the emergent frequency spectra, and the angular distribution of escaping photons are investigated. In the polarized case, transfer is calculated using both the cold plasma e- and o-modes and the magnetic vacuum perpendicular and parallel modes.

  10. General Path-Integral Successive-Collision Solution of the Bounded Dynamic Multi-Swarm Problem.

    DTIC Science & Technology

    1983-09-23

    coefficients (i.e., moments of the distribution functions), and/or (ii) finding the distribution functions themselves. The present work is concerned with the...collisions since their first appearance in the system. By definition, a swarm particle suffers a "generalized collision" either when it collides with a...studies and the present work have contributed towards making the path-integral successive-collision method a practicable tool of transport theory

  11. Spin foam models for quantum gravity from lattice path integrals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonzom, Valentin

    2009-09-15

    Spin foam models for quantum gravity are derived from lattice path integrals. The setting involves variables from both lattice BF theory and Regge calculus. The action consists in a Regge action, which depends on areas, dihedral angles and includes the Immirzi parameter. In addition, a measure is inserted to ensure a consistent gluing of simplices, so that the amplitude is dominated by configurations that satisfy the parallel transport relations. We explicitly compute the path integral as a sum over spin foams for a generic measure. The Freidel-Krasnov and Engle-Pereira-Rovelli models correspond to a special choice of gluing. In this case, the equations of motion describe genuine geometries, where the constraints of area-angle Regge calculus are satisfied. Furthermore, the Immirzi parameter drops out of the on-shell action, and stationarity with respect to area variations requires spacetime geometry to be flat.

  12. Qualitative Evaluation of Project P.A.T.H.S.: An Integration of Findings Based on Program Participants

    PubMed Central

    Shek, Daniel T. L.; Sun, Rachel C. F.

    2012-01-01

    An integration of the qualitative evaluation findings collected in different cohorts of students who participated in Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) (n = 252 students in 29 focus groups) was carried out. With specific focus on how the informants described the program, results showed that the descriptions were mainly positive in nature, suggesting that the program was well received by the program participants. When the informants were invited to name three metaphors that could stand for the program, positive metaphors were commonly used. Beneficial effects of the program in different psychosocial domains were also voiced by the program participants. The qualitative findings integrated in this paper provide further support for the effectiveness of the Tier 1 Program of Project P.A.T.H.S. in promoting holistic development in Chinese adolescents in Hong Kong. PMID:22666134

  13. An accurate European option pricing model under Fractional Stable Process based on Feynman Path Integral

    NASA Astrophysics Data System (ADS)

    Ma, Chao; Ma, Qinghua; Yao, Haixiang; Hou, Tiancheng

    2018-03-01

    In this paper, we propose to use the Fractional Stable Process (FSP) for option pricing. The FSP is one of the few candidates to directly model a number of desired empirical properties of asset price risk neutral dynamics. However, pricing the vanilla European option under FSP is difficult and problematic. Built upon Feynman Path Integral inspired techniques, we present a novel computational model for option pricing, the Fractional Stable Process Path Integral (FSPPI) model under a general fractional stable distribution, that tackles this problem. Numerical and empirical experiments show that the proposed pricing model corrects the Black-Scholes pricing error (overpricing long-term options and underpricing short-term options; overpricing out-of-the-money options and underpricing in-the-money options) without any additional structures such as stochastic volatility or a jump process.

  14. High-density amorphous ice: A path-integral simulation

    NASA Astrophysics Data System (ADS)

    Herrero, Carlos P.; Ramírez, Rafael

    2012-09-01

    Structural and thermodynamic properties of high-density amorphous (HDA) ice have been studied by path-integral molecular dynamics simulations in the isothermal-isobaric ensemble. Interatomic interactions were modeled by using the effective q-TIP4P/F potential for flexible water. Quantum nuclear motion is found to affect several observable properties of the amorphous solid. At low temperature (T = 50 K) the molar volume of HDA ice is found to increase by 6%, and the intramolecular O-H distance rises by 1.4% due to quantum motion. Peaks in the radial distribution function of HDA ice are broadened with respect to their classical expectation. The bulk modulus, B, is found to rise linearly with the pressure, with a slope ∂B/∂P = 7.1. Our results are compared with those derived earlier from classical and path-integral simulations of HDA ice. We discuss similarities and discrepancies with those earlier simulations.

  15. ER = EPR and non-perturbative action integrals for quantum gravity

    NASA Astrophysics Data System (ADS)

    Alsaleh, Salwa; Alasfar, Lina

    In this paper, we construct and calculate non-perturbative path integrals in a multiply-connected spacetime. This is done by summing over homotopy classes of paths. The topology of the spacetime is defined by Einstein-Rosen bridges (ERB) forming from the entanglement of quantum foam described by virtual black holes. As these “bubbles” are entangled, they are connected by Planckian ERBs because of the ER = EPR conjecture. Hence, the spacetime will possess a large first Betti number B1. For any compact 2-surface in the spacetime, the topology (in particular the homotopy) of that surface is non-trivial due to the large number of Planckian ERBs that define homotopy through this surface. The quantization of spacetime with this topology — along with the proper choice of the 2-surfaces — is conjectured to allow non-perturbative path integrals of quantum gravity theory over the spacetime manifold.

  16. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION.

    PubMed

    Wang, Lan; Kim, Yongdai; Li, Runze

    2013-10-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis.

  17. CALIBRATING NON-CONVEX PENALIZED REGRESSION IN ULTRA-HIGH DIMENSION

    PubMed Central

    Wang, Lan; Kim, Yongdai; Li, Runze

    2014-01-01

    We investigate high-dimensional non-convex penalized regression, where the number of covariates may grow at an exponential rate. Although recent asymptotic theory established that there exists a local minimum possessing the oracle property under general conditions, it is still largely an open problem how to identify the oracle estimator among potentially multiple local minima. There are two main obstacles: (1) due to the presence of multiple minima, the solution path is nonunique and is not guaranteed to contain the oracle estimator; (2) even if a solution path is known to contain the oracle estimator, the optimal tuning parameter depends on many unknown factors and is hard to estimate. To address these two challenging issues, we first prove that an easy-to-calculate calibrated CCCP algorithm produces a consistent solution path which contains the oracle estimator with probability approaching one. Furthermore, we propose a high-dimensional BIC criterion and show that it can be applied to the solution path to select the optimal tuning parameter which asymptotically identifies the oracle estimator. The theory for a general class of non-convex penalties in the ultra-high dimensional setup is established when the random errors follow the sub-Gaussian distribution. Monte Carlo studies confirm that the calibrated CCCP algorithm combined with the proposed high-dimensional BIC has desirable performance in identifying the underlying sparsity pattern for high-dimensional data analysis. PMID:24948843

  18. Cartographic modeling of snow avalanche path location within Glacier National Park, Montana

    NASA Technical Reports Server (NTRS)

    Walsh, Stephen J.; Brown, Daniel G.; Bian, Ling; Butler, David R.

    1990-01-01

    Geographic information system (GIS) techniques were applied to the study of snow-avalanche path location within Glacier National Park, Montana. Aerial photointerpretation and field surveys confirmed the location of 121 avalanche paths within the selected study area. Spatial and nonspatial information on each path were integrated using the ARC/INFO GIS. Lithologic, structural, hydrographic, topographic, and land-cover impacts on path location were analyzed. All path frequencies within variable classes were normalized by the area of class occurrence relative to the total area of the study area and were added to the morphometric information contained within INFO tables. The normalized values for each GIS coverage were used to cartographically model, by means of composite factor weightings, avalanche path locations.
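
    The following sketch shows one plausible reading of the normalization step described above: path counts per class are compared with what the class's areal share of the study area would predict, giving weights that can enter a composite factor weighting. Class names and numbers are invented for illustration.

      # Illustrative normalization of path frequencies by class area (made-up numbers).
      def normalized_frequency(path_count, class_area, total_area, total_paths):
          """Paths per class relative to what the class's areal share would predict."""
          expected = total_paths * (class_area / total_area)
          return path_count / expected if expected > 0 else 0.0

      # Hypothetical land-cover classes: (paths observed, class area in km^2)
      classes = {"bedrock": (40, 50.0), "shrub": (60, 120.0), "forest": (21, 230.0)}
      total_area = sum(a for _, a in classes.values())
      total_paths = sum(n for n, _ in classes.values())
      weights = {c: normalized_frequency(n, a, total_area, total_paths)
                 for c, (n, a) in classes.items()}
      print(weights)   # >1 means the class hosts more paths than its area alone would suggest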

  19. Effect of an overhead shield on gamma-ray skyshine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stedry, M.H.; Shultis, J.K.; Faw, R.E.

    1996-06-01

    A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.

  20. Does sheep selectivity along grazing paths negatively affect biological crusts and soil seed banks in arid shrublands? A case study in the Patagonian Monte, Argentina.

    PubMed

    Bertiller, M B; Ares, J O

    2011-08-01

    Domestic animals potentially affect the reproductive output of plants by direct removal of aboveground plant parts but could also alter the structure and fertility of the upper soil and the integrity of biological crusts through trampling. We asked whether sheep selectivity of plant patches along grazing paths could lead to negative changes in biological crusts and soil seed banks. We randomly selected ten floristically homogeneous vegetation stands distributed across an area (1250 ha) grazed by free-ranging sheep. Vegetation stands were differently selected by sheep as estimated through sheep-collaring techniques combined with remote imagery mapping. At each stand, we extracted 15 paired cylindrical soil cores from biological crusts and from neighboring soil without crusts. We evaluated the crust cover enclosed in each core and incubated the soil samples at field capacity at alternating 10-18 °C for 24 months. We counted the emerged seedlings and identified them by species. Sheep selectivity along grazing paths was largest at mid-distances to the watering point of the paddock. Increasing sheep selectivity was associated with the reduction of the cover of biological crusts and the size and species number of the soil seed bank of preferred perennial grasses under biological crusts. The size of the soil seed bank of annual grasses was reduced with increasing sheep selectivity under both crust and no-crust soil conditions. We did not detect changes in the soil seed banks of less-preferred and non-preferred species (shrubs and forbs) related to sheep selectivity. Our findings highlight the negative effects of sheep selectivity on biological crusts and the soil seed bank of preferred plant species and the positive relationship between biological crusts and the size of the soil seed bank of perennial grasses. Accordingly, the state of conservation of biological crusts could be useful to assess the state of the soil seed banks of perennial grasses for monitoring, conservation and planning the sustainable management of grazing lands.

  1. Stereo Image Dense Matching by Integrating Sift and Sgm Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Song, Y.; Lu, J.

    2018-05-01

    Semi-global matching (SGM) performs dynamic programming by treating the different path directions equally. It does not consider the impact of different path directions on cost aggregation, and as the disparity search range expands, the accuracy and efficiency of the algorithm decrease drastically. This paper presents a dense matching algorithm that integrates SIFT and SGM. It takes successful SIFT matches as control points to guide the dynamic-programming paths and truncate error propagation. In addition, matching accuracy can be improved by using the gradient direction of the detected feature points to modify the weights of the paths in different directions. The experimental results based on Middlebury stereo data sets and CE-3 lunar data sets demonstrate that the proposed algorithm can effectively cut off error propagation, reduce the disparity search range and improve matching accuracy.
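
    The sketch below illustrates the underlying mechanics: standard SGM cost aggregation along a single path direction, with an optional control point (standing in for a reliable SIFT match) that pins the disparity and truncates error propagation along that path. The penalties P1, P2 and the control-point handling are assumptions, not the authors' implementation.

      # Toy SGM cost aggregation along one scanline, with an optional control point.
      import numpy as np

      def aggregate_1d(cost, P1=10.0, P2=120.0, control=None):
          """cost: (W, D) matching cost along one scanline; control: {pixel: disparity}."""
          W, D = cost.shape
          L = np.zeros_like(cost)
          L[0] = cost[0]
          for x in range(1, W):
              prev = L[x - 1]
              min_prev = prev.min()
              same = prev                          # stay at the same disparity
              plus = np.roll(prev, 1) + P1         # come from disparity d-1
              plus[0] = np.inf
              minus = np.roll(prev, -1) + P1       # come from disparity d+1
              minus[-1] = np.inf
              far = min_prev + P2                  # larger disparity jumps
              L[x] = cost[x] + np.minimum(np.minimum(same, plus),
                                          np.minimum(minus, far)) - min_prev
              if control is not None and x in control:
                  # Restart aggregation at a control point: trust the SIFT disparity and
                  # stop errors accumulated so far from propagating further along the path.
                  L[x] = np.full(D, 1e6)
                  L[x, control[x]] = cost[x, control[x]]
          return L

      rng = np.random.default_rng(0)
      cost = rng.random((64, 32))                        # toy matching-cost volume
      L = aggregate_1d(cost, control={32: 7})            # assume a SIFT match at pixel 32
      print(L.argmin(axis=1)[:10])                       # winner-take-all disparities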

  2. Preserving correlations between trajectories for efficient path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingrich, Todd R.; Geissler, Phillip L.; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-06-21

    Importance sampling of trajectories has proved a uniquely successful strategy for exploring rare dynamical behaviors of complex systems in an unbiased way. Carrying out this sampling, however, requires an ability to propose changes to dynamical pathways that are substantial, yet sufficiently modest to obtain reasonable acceptance rates. Satisfying this requirement becomes very challenging in the case of long trajectories, due to the characteristic divergences of chaotic dynamics. Here, we examine schemes for addressing this problem, which engineer correlation between a trial trajectory and its reference path, for instance using artificial forces. Our analysis is facilitated by a modern perspective on Markov chain Monte Carlo sampling, inspired by non-equilibrium statistical mechanics, which clarifies the types of sampling strategies that can scale to long trajectories. Viewed in this light, the most promising such strategy guides a trial trajectory by manipulating the sequence of random numbers that advance its stochastic time evolution, as done in a handful of existing methods. In cases where this “noise guidance” synchronizes trajectories effectively, as in the Glauber dynamics of a two-dimensional Ising model, we show that efficient path sampling can be achieved for even very long trajectories.
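
    A minimal sketch of the noise-guidance idea, using overdamped Langevin dynamics in a one-dimensional double well rather than the Ising/Glauber system of the paper: a trial trajectory reuses the reference path's random numbers and redraws only a small block of them, so the two trajectories remain correlated even when they are long. Because the redrawn block is proposed from the same Gaussian it is drawn from in the dynamics, a trial is accepted whenever it remains reactive. All parameters below are illustrative.

      # Toy noise-space path sampling for a double-well Langevin particle.
      import numpy as np

      rng = np.random.default_rng(1)
      beta, dt, n_steps = 1.0, 1e-3, 5000
      force = lambda x: -4.0 * x * (x * x - 1.0)            # V(x) = (x^2 - 1)^2

      def run(x0, noise):
          x = np.empty(n_steps + 1); x[0] = x0
          for i in range(n_steps):
              x[i + 1] = x[i] + force(x[i]) * dt + np.sqrt(2.0 * dt / beta) * noise[i]
          return x

      def reactive(path):                    # starts in basin A (x < -0.8), ends in B (x > 0.8)
          return path[0] < -0.8 and path[-1] > 0.8

      # Build an initial reactive reference path by brute force.
      while True:
          noise = rng.standard_normal(n_steps)
          path = run(-1.0, noise)
          if reactive(path):
              break

      accepted = 0
      for sweep in range(200):
          trial_noise = noise.copy()
          start = rng.integers(0, n_steps - 100)
          trial_noise[start:start + 100] = rng.standard_normal(100)   # redraw a small noise block
          trial_path = run(-1.0, trial_noise)
          if reactive(trial_path):           # symmetric Gaussian proposal: accept iff reactive
              noise, path = trial_noise, trial_path
              accepted += 1
      print("acceptance rate:", accepted / 200)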

  3. Improved transition path sampling methods for simulation of rare events

    NASA Astrophysics Data System (ADS)

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.

    2008-04-01

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.

  4. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  5. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    PubMed

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  6. Mission Operations and Navigation Toolkit Environment

    NASA Technical Reports Server (NTRS)

    Sunseri, Richard F.; Wu, Hsi-Cheng; Hanna, Robert A.; Mossey, Michael P.; Duncan, Courtney B.; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.; Martin Mur, Tomas J.; hide

    2009-01-01

    MONTE (Mission Operations and Navigation Toolkit Environment) Release 7.3 is an extensible software system designed to support trajectory and navigation analysis/design for space missions. MONTE is intended to replace the current navigation and trajectory analysis software systems, which, at the time of this reporting, are used by JPL's Navigation and Mission Design section. The software provides an integrated, simplified, and flexible system that can be easily maintained to serve the needs of future missions in need of navigation services.

  7. Quantum Monte Carlo tunneling from quantum chemistry to quantum annealing

    NASA Astrophysics Data System (ADS)

    Mazzola, Guglielmo; Smelyanskiy, Vadim N.; Troyer, Matthias

    2017-10-01

    Quantum tunneling is ubiquitous across different fields, from quantum chemical reactions and magnetic materials to quantum simulators and quantum computers. While simulating the real-time quantum dynamics of tunneling is infeasible for high-dimensional systems, quantum tunneling also shows up in quantum Monte Carlo (QMC) simulations, which aim to simulate quantum statistics with resources growing only polynomially with the system size. Here we extend the recent results obtained for quantum spin models [Phys. Rev. Lett. 117, 180402 (2016), 10.1103/PhysRevLett.117.180402], and we study continuous-variable models for proton transfer reactions. We demonstrate that QMC simulations efficiently recover the scaling of ground-state tunneling rates due to the existence of an instanton path, which always connects the reactant state with the product. We discuss the implications of our results in the context of quantum chemical reactions and quantum annealing, where quantum tunneling is expected to be a valuable resource for solving combinatorial optimization problems.

  8. CTRANS: A Monte Carlo program for radiative transfer in plane parallel atmospheres with imbedded finite clouds: Development, testing and user's guide

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The CTRANS program, designed to perform radiative transfer computations in an atmosphere with horizontal inhomogeneities (clouds), is described. Since the atmosphere-ground system was to be richly detailed, the Monte Carlo method was employed. This means that results are obtained through direct modeling of the physical process of radiative transport. The effects of atmospheric or ground albedo pattern detail are essentially built up from their impact upon the transport of individual photons. The CTRANS program actually tracks the photons backwards through the atmosphere, initiating them at a receiver and following them backwards along their path to the Sun. The pattern of incident photons generated through backwards tracking automatically reflects the importance to the receiver of each region of the sky. Further, through backwards tracking, the impact of the finite field of view of the receiver and variations in its response over the field of view can be directly simulated.

  9. An examination of the sensitivity and systematic error of the NASA GEMS Bragg Reflection Polarimeter using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Allured, Ryan; Okajima, Takashi; Soufli, Regina; Fernández-Perea, Mónica; Daly, Ryan O.; Marlowe, Hannah; Griffiths, Scott T.; Pivovaroff, Michael J.; Kaaret, Philip

    2012-10-01

    The Bragg Reflection Polarimeter (BRP) on the NASA Gravity and Extreme Magnetism Small Explorer Mission is designed to measure the linear polarization of astrophysical sources in a narrow band centered at about 500 eV. X-rays are focused by Wolter I mirrors through a 4.5 m focal length to a time projection chamber (TPC) polarimeter, sensitive between 2 and 10 keV. In this optical path lies the BRP multilayer reflector at a nominal 45 degree incidence angle. The reflector reflects soft X-rays to the BRP detector and transmits hard X-rays to the TPC. As the spacecraft rotates about the optical axis, the reflected count rate will vary depending on the polarization of the incident beam. However, false polarization signals may be produced due to misalignments and spacecraft pointing wobble. Monte-Carlo simulations have been carried out, showing that the false modulation is below the statistical uncertainties for the expected focal plane offsets of < 2 mm.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikvold, Per Arne; Brown, Gregory; Miyashita, Seiji

    Phase diagrams and hysteresis loops were obtained by Monte Carlo simulations and a mean-field method for a simplified model of a spin-crossover material with a two-step transition between the high-spin and low-spin states. This model is a mapping onto a square-lattice S = 1/2 Ising model with antiferromagnetic nearest-neighbor and ferromagnetic Husimi-Temperley (equivalent-neighbor) long-range interactions. Phase diagrams obtained by the two methods for weak and strong long-range interactions are found to be similar. However, for intermediate-strength long-range interactions, the Monte Carlo simulations show that tricritical points decompose into pairs of critical end points and mean-field critical points surrounded by horn-shaped regions of metastability. Hysteresis loops along paths traversing the horn regions are strongly reminiscent of thermal two-step transition loops with hysteresis, recently observed experimentally in several spin-crossover materials. As a result, we believe analogous phenomena should be observable in experiments and simulations for many systems that exhibit competition between local antiferromagnetic-like interactions and long-range ferromagnetic-like interactions caused by elastic distortions.
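
    A minimal Metropolis sketch of the mapped model described above, i.e. a square-lattice Ising model with an antiferromagnetic nearest-neighbor coupling and a ferromagnetic equivalent-neighbor long-range coupling. The sign convention, couplings, field and temperature below are assumptions for illustration, not the authors' parameter values.

      # Assumed convention: E = J_nn*sum_<ij> s_i s_j - (J_lr/2N)*M^2 - H*M, with M = sum_i s_i.
      import numpy as np

      def metropolis_sweep(spins, J_nn, J_lr, H, T, rng):
          L = spins.shape[0]
          N = L * L
          M = spins.sum()
          for _ in range(N):
              i, j = rng.integers(0, L, size=2)
              s = spins[i, j]
              nn_sum = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                        spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
              dE = (-2.0 * J_nn * s * nn_sum          # antiferromagnetic nearest-neighbor part
                    + 2.0 * J_lr / N * (s * M - 1.0)  # equivalent-neighbor ferromagnetic part
                    + 2.0 * H * s)                    # external field
              if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                  spins[i, j] = -s
                  M -= 2 * s
          return M / N

      rng = np.random.default_rng(0)
      L = 32
      spins = rng.choice([-1, 1], size=(L, L))
      for sweep in range(200):
          m = metropolis_sweep(spins, J_nn=1.0, J_lr=0.5, H=0.1, T=1.5, rng=rng)
      print("magnetization per site:", m)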

  11. Generalizing the self-healing diffusion Monte Carlo approach to finite temperature: a path for the optimization of low-energy many-body basis expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jeongnim; Reboredo, Fernando A.

    The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one expanded by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace expanded by the small basis systematically converges towards the subspace expanded by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.

  12. IMRT head and neck treatment planning with a commercially available Monte Carlo based planning system

    NASA Astrophysics Data System (ADS)

    Boudreau, C.; Heath, E.; Seuntjens, J.; Ballivy, O.; Parker, W.

    2005-03-01

    The PEREGRINE Monte Carlo dose-calculation system (North American Scientific, Cranberry Township, PA) is the first commercially available Monte Carlo dose-calculation code intended specifically for intensity modulated radiotherapy (IMRT) treatment planning and quality assurance. In order to assess the impact of Monte Carlo based dose calculations for IMRT clinical cases, dose distributions for 11 head and neck patients were evaluated using both PEREGRINE and the CORVUS (North American Scientific, Cranberry Township, PA) finite size pencil beam (FSPB) algorithm with equivalent path-length (EPL) inhomogeneity correction. For the target volumes, PEREGRINE calculations predict, on average, a less than 2% difference in the calculated mean and maximum doses to the gross tumour volume (GTV) and clinical target volume (CTV). An average 16% ± 4% and 12% ± 2% reduction in the volume covered by the prescription isodose line was observed for the GTV and CTV, respectively. Overall, no significant differences were noted in the doses to the mandible and spinal cord. For the parotid glands, PEREGRINE predicted a 6% ± 1% increase in the volume of tissue receiving a dose greater than 25 Gy and an increase of 4% ± 1% in the mean dose. Similar results were noted for the brainstem where PEREGRINE predicted a 6% ± 2% increase in the mean dose. The observed differences between the PEREGRINE and CORVUS calculated dose distributions are attributed to secondary electron fluence perturbations, which are not modelled by the EPL correction, issues of organ outlining, particularly in the vicinity of air cavities, and differences in dose reporting (dose to water versus dose to tissue type).
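
    For reference, the sketch below spells out the equivalent path-length (EPL) idea mentioned above: the geometric step through each voxel is scaled by its relative electron density to give a radiological depth, which a pencil-beam algorithm then uses in place of the geometric depth. The densities and step size are illustrative values only.

      # Illustrative radiological depth along a ray (invented densities and step size).
      def radiological_depth(rel_electron_density, step_cm):
          """Sum of (relative electron density x geometric step) along a ray."""
          return sum(rho * step_cm for rho in rel_electron_density)

      # Ray crossing 1 cm of tissue, 3 cm of lung-like medium, then 2 cm of tissue (0.5 cm steps)
      densities = [1.0, 1.0] + [0.25] * 6 + [1.0] * 4
      print("geometric depth:", 0.5 * len(densities), "cm")                        # 6.0 cm
      print("equivalent path length:", radiological_depth(densities, 0.5), "cm")   # 3.75 cm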

  13. PathEdEx – Uncovering High-explanatory Visual Diagnostics Heuristics Using Digital Pathology and Multiscale Gaze Data

    PubMed Central

    Shin, Dmitriy; Kovalenko, Mikhail; Ersoy, Ilker; Li, Yu; Doll, Donald; Shyu, Chi-Ren; Hammer, Richard

    2017-01-01

    Background: Visual heuristics of pathology diagnosis is a largely unexplored area where reported studies only provided a qualitative insight into the subject. Uncovering and quantifying pathology visual and nonvisual diagnostic patterns have great potential to improve clinical outcomes and avoid diagnostic pitfalls. Methods: Here, we present PathEdEx, an informatics computational framework that incorporates whole-slide digital pathology imaging with multiscale gaze-tracking technology to create web-based interactive pathology educational atlases and to datamine visual and nonvisual diagnostic heuristics. Results: We demonstrate the capabilities of PathEdEx for mining visual and nonvisual diagnostic heuristics using the first PathEdEx volume of a hematopathology atlas. We conducted a quantitative study on the time dynamics of zooming and panning operations utilized by experts and novices to come to the correct diagnosis. We then performed association rule mining to determine sets of diagnostic factors that consistently result in a correct diagnosis, and studied differences in diagnostic strategies across different levels of pathology expertise using Markov chain (MC) modeling and MC Monte Carlo simulations. To perform these studies, we translated raw gaze points to high-explanatory semantic labels that represent pathology diagnostic clues. Therefore, the outcome of these studies is readily transformed into narrative descriptors for direct use in pathology education and practice. Conclusion: PathEdEx framework can be used to capture best practices of pathology visual and nonvisual diagnostic heuristics that can be passed over to the next generation of pathologists and have potential to streamline implementation of precision diagnostics in precision medicine settings. PMID:28828200
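
    The sketch below illustrates the Markov-chain ingredient of this analysis: estimating a transition matrix between semantic gaze labels from observed viewing sequences and then sampling synthetic diagnostic paths from it. The labels and sequences are invented for illustration and are not taken from the PathEdEx atlas.

      # Toy Markov-chain model of semantic gaze-label transitions (made-up data).
      import numpy as np

      labels = ["low_power_scan", "zoom_region", "inspect_cells", "check_clinical", "diagnose"]
      sequences = [   # hypothetical gaze-label sequences from two viewers
          ["low_power_scan", "zoom_region", "inspect_cells", "zoom_region", "inspect_cells", "diagnose"],
          ["low_power_scan", "check_clinical", "zoom_region", "inspect_cells", "diagnose"],
      ]

      idx = {l: i for i, l in enumerate(labels)}
      counts = np.zeros((len(labels), len(labels)))
      for seq in sequences:
          for a, b in zip(seq[:-1], seq[1:]):
              counts[idx[a], idx[b]] += 1
      P = counts / counts.sum(axis=1, keepdims=True).clip(min=1)   # row-normalized transition matrix

      rng = np.random.default_rng(0)
      state = idx["low_power_scan"]
      path = [labels[state]]
      while labels[state] != "diagnose" and len(path) < 20:        # sample one synthetic path
          state = rng.choice(len(labels), p=P[state])
          path.append(labels[state])
      print(" -> ".join(path))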

  14. Error rate performance of atmospheric laser communication based on bubble model

    NASA Astrophysics Data System (ADS)

    Xu, Ke; Wang, Jin; Li, Yan

    2009-08-01

    Free-Space Optics (FSO) can provide effective line-of-sight wireless communication with high bandwidth over short distances. As a promising field of wireless communication, FSO is being accepted as an alternative to more expensive fiber-optic based solutions. Despite the advantages of FSO, atmospheric turbulence has a significant impact on a laser beam propagating through the atmospheric channel over a long distance. Turbulent eddies of various sizes and refractive indices result in intensity scintillation and phase wander, which can severely impair the quality of an FSO communication system. In this paper, a new geometrical model is used to assess the effects of turbulence on the laser beam along its propagation path. The atmosphere along the transmission path is modeled as being filled with spatially distributed spherical bubbles. The size and refractive-index discontinuity of each bubble are K-distributed. This Monte Carlo technique allows us to estimate the fluctuation of intensity and the phase shifts along the path. A pair of uncollimated rays arrives at the receiver through different paths, producing an optical path difference and hence a delay between the two rays. At the receiver, as the two rays are superposed, this delay ultimately affects the bit decision. In the simulation, we assume that a bit error is possible when the delay exceeds half of the bit width, and that no bit error occurs when the delay is below this threshold. Based on this assumption, we calculate the BER under different conditions, and the results are further analyzed.
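
    The sketch below implements the stated bit-error criterion in a toy Monte Carlo: two rays accumulate different optical path lengths through randomly drawn bubbles, and a bit is counted as possibly in error when the differential delay exceeds half the bit period. All parameters (bit rate, bubble statistics, Gaussian index fluctuations in place of the K-distributed ones) are invented for illustration.

      # Toy Monte Carlo of the delay-based bit-error criterion (all parameters invented).
      import numpy as np

      rng = np.random.default_rng(0)
      c = 3.0e8                      # speed of light (m/s)
      bit_rate = 40e9                # 40 Gb/s -> half bit period of 12.5 ps
      half_bit = 0.5 / bit_rate
      n_bubbles = 400                # bubbles encountered along the link by each ray
      bubble_size = 1.0              # mean bubble chord length (m)
      dn_sigma = 5e-5                # std of refractive-index discontinuity per bubble (Gaussian stand-in)

      def optical_path(rng):
          """Extra optical path of one ray: sum of (index fluctuation x chord length)."""
          chords = rng.exponential(bubble_size, n_bubbles)
          dn = rng.normal(0.0, dn_sigma, n_bubbles)
          return np.sum(dn * chords)

      n_bits, errors = 20000, 0
      for _ in range(n_bits):
          delay = abs(optical_path(rng) - optical_path(rng)) / c   # differential delay of two rays
          if delay > half_bit:
              errors += 1
      print("estimated BER:", errors / n_bits)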

  15. PathEdEx - Uncovering High-explanatory Visual Diagnostics Heuristics Using Digital Pathology and Multiscale Gaze Data.

    PubMed

    Shin, Dmitriy; Kovalenko, Mikhail; Ersoy, Ilker; Li, Yu; Doll, Donald; Shyu, Chi-Ren; Hammer, Richard

    2017-01-01

    Visual heuristics of pathology diagnosis is a largely unexplored area where reported studies only provided a qualitative insight into the subject. Uncovering and quantifying pathology visual and nonvisual diagnostic patterns have great potential to improve clinical outcomes and avoid diagnostic pitfalls. Here, we present PathEdEx, an informatics computational framework that incorporates whole-slide digital pathology imaging with multiscale gaze-tracking technology to create web-based interactive pathology educational atlases and to datamine visual and nonvisual diagnostic heuristics. We demonstrate the capabilities of PathEdEx for mining visual and nonvisual diagnostic heuristics using the first PathEdEx volume of a hematopathology atlas. We conducted a quantitative study on the time dynamics of zooming and panning operations utilized by experts and novices to come to the correct diagnosis. We then performed association rule mining to determine sets of diagnostic factors that consistently result in a correct diagnosis, and studied differences in diagnostic strategies across different levels of pathology expertise using Markov chain (MC) modeling and MC Monte Carlo simulations. To perform these studies, we translated raw gaze points to high-explanatory semantic labels that represent pathology diagnostic clues. Therefore, the outcome of these studies is readily transformed into narrative descriptors for direct use in pathology education and practice. PathEdEx framework can be used to capture best practices of pathology visual and nonvisual diagnostic heuristics that can be passed over to the next generation of pathologists and have potential to streamline implementation of precision diagnostics in precision medicine settings.

  16. Continuous quantum measurements and the action uncertainty principle

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach the measurement amplitude determining probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral “in finite limits”). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of a gravitational field. A stronger (and more widely applicable) form of the AUP, for ideal measurements performed in the quantum regime, is |∫_{t′}^{t″} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand respectively for the measurement output and the measurement error. It can also be presented in symbolic form as Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  17. Career Paths in Environmental Sciences

    EPA Science Inventory

    Career paths, current and future, in the environmental sciences will be discussed, based on experiences and observations during the author's 40 + years in the field. An emphasis will be placed on the need for integrated, transdisciplinary systems thinking approaches toward achie...

  18. 3,4-Methylenedioxymethamphetamine in Adult Rats Produces Deficits in Path Integration and Spatial Reference Memory

    PubMed Central

    Able, Jessica A.; Gudelsky, Gary A.; Vorhees, Charles V.; Williams, Michael T.

    2010-01-01

    Background ±3,4-Methylenedioxymethamphetamine (MDMA) is a recreational drug that causes cognitive deficits in humans. A rat model for learning and memory deficits has not been established, although some cognitive deficits have been reported. Methods Male Sprague-Dawley rats were treated with MDMA (15 mg/kg × 4 doses) or saline (SAL) (n = 20/treatment group) and tested in different learning paradigms: 1) path integration in the Cincinnati water maze (CWM), 2) spatial learning in the Morris water maze (MWM), and 3) novel object recognition (NOR). One week after drug administration, testing began in the CWM, then four phases of MWM, and finally NOR. Following behavioral testing, monoamine levels were assessed. Results ±3,4-Methylenedioxymethamphetamine-treated rats committed more CWM errors than did SAL-treated rats. ±3,4-Methylenedioxymethamphetamine-treated animals were further from the former platform position during each 30-second MWM probe trial but showed no differences during learning trials with the platform present. There were no group differences in NOR. ± 3,4-Methylenedioxymethamphetamine depleted serotonin in all brain regions and dopamine in the striatum. Conclusions ±3,4-Methylenedioxymethamphetamine produced MWM reference memory deficits even after complex learning in the CWM, where deficits in path integration learning occurred. Assessment of path integration may provide a sensitive index of MDMA-induced learning deficits. PMID:16324685

  19. Quantization of simple parametrized systems

    NASA Astrophysics Data System (ADS)

    Ruffini, G.

    2005-11-01

    I study the canonical formulation and quantization of some simple parametrized systems, including the non-relativistic parametrized particle and the relativistic parametrized particle. Using Dirac's formalism I construct for each case the classical reduced phase space and study the dependence on the gauge fixing used. Two separate features of these systems can make this construction difficult: the actions are not invariant at the boundaries, and the constraints may have disconnected solution spaces. The relativistic particle is affected by both, while the non-relativistic particle displays only the first. Analyzing the role of canonical transformations in the reduced phase space, I show that a change of gauge fixing is equivalent to a canonical transformation. In the relativistic case, quantization of one branch of the constraint at a time is applied and I analyze the electromagnetic backgrounds in which it is possible to quantize both branches simultaneously and still obtain a covariant unitary quantum theory. To preserve unitarity and space-time covariance, second quantization is needed unless there is no electric field. I motivate a definition of the inner product in all these cases and derive the Klein-Gordon inner product for the relativistic case. I construct phase space path integral representations for amplitudes for the BFV and the Faddeev path integrals, from which the path integrals in coordinate space (Faddeev-Popov and geometric path integrals) are derived.

  20. A path integral approach to the Hodgkin-Huxley model

    NASA Astrophysics Data System (ADS)

    Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando

    2017-11-01

    To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of the sodium and potassium voltage-dependent channels that open or close the gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials, through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach to the stochastic HH equations, but by considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.
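
    A minimal simulation sketch to accompany the stochastic HH picture above: the standard Hodgkin-Huxley equations driven by an exponentially correlated (Ornstein-Uhlenbeck) noise current. The Gaussian OU noise is only a simplified stand-in for the non-Gaussian colored noise q treated analytically in the paper, and all parameter values are textbook or illustrative.

      # Standard HH model with an Ornstein-Uhlenbeck (colored) noise current.
      import numpy as np

      # Standard HH rate functions (V in mV)
      a_n = lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
      b_n = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)
      a_m = lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
      b_m = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
      a_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
      b_h = lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

      C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3          # uF/cm^2, mS/cm^2
      ENa, EK, EL = 50.0, -77.0, -54.4                # mV
      dt, T_total = 0.01, 200.0                       # ms
      tau_noise, sigma_noise, I_mean = 2.0, 3.0, 6.0  # OU correlation time (ms), amplitude, mean drive

      rng = np.random.default_rng(0)
      V, n, m, h, eta = -65.0, 0.32, 0.05, 0.6, 0.0
      spikes, above = 0, False
      for step in range(int(T_total / dt)):
          # Ornstein-Uhlenbeck (exponentially correlated) noise current
          eta += (-eta / tau_noise) * dt + sigma_noise * np.sqrt(2.0 * dt / tau_noise) * rng.standard_normal()
          I = I_mean + eta
          INa = gNa * m**3 * h * (V - ENa)
          IK = gK * n**4 * (V - EK)
          IL = gL * (V - EL)
          V += dt * (I - INa - IK - IL) / C
          n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
          m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
          h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
          if V > 0.0 and not above:          # count upward crossings of 0 mV as spikes
              spikes += 1
          above = V > 0.0
      print("spikes in", T_total, "ms:", spikes)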
