Science.gov

Sample records for path probability method

  1. Dynamic phase diagrams of a ferrimagnetic mixed spin (1/2, 1) Ising system within the path probability method

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-11-01

    In this study we used the path probability method (PPM) to calculate the dynamic phase diagrams of a ferrimagnetic mixed spin-(1/2, 1) Ising system under an oscillating magnetic field. One of the main advantages of the PPM over the mean-field approximation and the effective-field theory based on Glauber-type stochastic dynamics is that it contains two rate constants, which are very important for studying dynamic behaviors. We present the dynamic phase diagrams in the plane of reduced magnetic field amplitude versus reduced temperature, and twelve main topologically different types of phase diagrams are obtained. The phase diagrams contain paramagnetic (p), ferrimagnetic (i) and i + p mixed phases. They also exhibit dynamic tricritical and reentrant behavior as well as dynamic double critical end points (B), critical end points (E), quadruple points (QP) and triple points (TP). The dynamic phase diagrams are compared with, and discussed in relation to, the phase diagrams obtained in previous works within the mean-field approximation and the effective-field theory based on Glauber-type stochastic dynamics.

  2. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock price in finance.
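
    For orientation, an editor-added point of reference (not necessarily the general functional derived in this paper): for an overdamped Langevin particle ẋ = f(x) + √(2D) ξ(t) driven by Gaussian white noise, the standard Onsager-Machlup form assigns a path x(t), 0 ≤ t ≤ T, the weight

```latex
P[x(\cdot)] \;\propto\; \exp\left\{ -\int_{0}^{T}
  \frac{\left[\dot{x}(t) - f\left(x(t)\right)\right]^{2}}{4D}\, dt \right\}
```

    (up to a discretization-dependent Jacobian term); the probability of a tube of paths around a reference path is then obtained by integrating this weight over the tube, which is the type of quantity evaluated in the abstract above.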

  3. Pattern formation, logistics, and maximum path probability

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  4. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: Higher order theory based on the Bethe-Peierls and path probability method approximations

    SciTech Connect

    Edison, John R.; Monson, Peter A.

    2014-07-14

    Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with the thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from the PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.

  5. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry air mole fractions of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) over 48 months of satellite operation starting in June 2009. A recent version of the photon path-length probability density function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67-µm absorption band we applied an optical path correction based on PPDF parameters from the 1.6-µm carbon dioxide (CO2) absorption band. Similarly to the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67-µm and 1.6-µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over-land observations was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements of the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in three GOSAT short wave infrared (SWIR) bands. Application of the post-correction generally improves the statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the

  6. The Path-of-Probability Algorithm for Steering and Feedback Control of Flexible Needles

    PubMed Central

    Park, Wooram; Wang, Yunfeng; Chirikjian, Gregory S.

    2010-01-01

    In this paper we develop a new framework for path planning of flexible needles with bevel tips. Based on a stochastic model of needle steering, the probability density function for the needle tip pose is approximated as a Gaussian. The means and covariances are estimated using an error propagation algorithm which has second order accuracy. Then we adapt the path-of-probability (POP) algorithm to path planning of flexible needles with bevel tips. We demonstrate how our planning algorithm can be used for feedback control of flexible needles. We also derive a closed-form solution for the port placement problem for finding good insertion locations for flexible needles in the case when there are no obstacles. Furthermore, we propose a new method using reference splines with the POP algorithm to solve the path planning problem for flexible needles in more general cases that include obstacles. PMID:21151708

  7. Most probable paths in temporal weighted networks: An application to ocean transport

    NASA Astrophysics Data System (ADS)

    Ser-Giacomi, Enrico; Vasile, Ruggero; Hernández-García, Emilio; López, Cristóbal

    2015-07-01

    We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths and to a subset of highly probable paths that incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean Sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high-probability paths (or even a single most probable one) is enough to characterize the global connectivity properties of the network.
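
    As background on how such high-probability paths are computed in practice: a path's probability is the product of its step probabilities, so the most probable path between two nodes is a shortest path under −log-transformed weights. A minimal sketch (illustrative graph and probabilities only, not the authors' code):

```python
import heapq
import math

def most_probable_path(prob, source, target):
    """Most probable path in a directed graph whose edges carry transition
    probabilities: maximizing the product of probabilities is the same as
    minimizing the sum of -log(probability), so Dijkstra's algorithm applies."""
    cost = {source: 0.0}            # accumulated -log probability
    prev = {}
    heap = [(0.0, source)]
    while heap:
        c, u = heapq.heappop(heap)
        if u == target:
            break
        if c > cost.get(u, math.inf):
            continue
        for v, p in prob.get(u, {}).items():
            nc = c - math.log(p)
            if nc < cost.get(v, math.inf):
                cost[v] = nc
                prev[v] = u
                heapq.heappush(heap, (nc, v))
    path, node = [target], target   # reconstruct path and its probability
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-cost[target])

# toy example with made-up transition probabilities
prob = {"A": {"B": 0.7, "C": 0.3}, "B": {"D": 0.5}, "C": {"D": 0.9}}
print(most_probable_path(prob, "A", "D"))   # (['A', 'B', 'D'], ~0.35)
```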

  8. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high-quality and high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is, therefore, important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation; it does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates of primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided; no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and path pair total cost, while the overheads they impose, through protocol revision and increases in the amount of calculation and information to be exchanged, are large. This suggests that the first scheme, the most basic and simple one, is the best choice.

  9. Zero Density of Open Paths in the Lorentz Mirror Model for Arbitrary Mirror Probability

    NASA Astrophysics Data System (ADS)

    Kraemer, Atahualpa S.; Sanders, David P.

    2014-09-01

    We show, incorporating results obtained from numerical simulations, that in the Lorentz mirror model, the density of open paths in any finite box tends to 0 as the box size tends to infinity, for any mirror probability.

  10. Perturbative Methods in Path Integration

    NASA Astrophysics Data System (ADS)

    Johnson-Freyd, Theodore Paul

    This dissertation addresses a number of related questions concerning perturbative "path" integrals. Perturbative methods are one of the few successful ways physicists have worked with (or even defined) these infinite-dimensional integrals, and it is important for mathematicians to check that they are correct. Chapter 0 provides a detailed introduction. We take a classical approach to path integrals in Chapter 1. Following standard arguments, we posit a Feynman-diagrammatic description of the asymptotics of the time-evolution operator for the quantum mechanics of a charged particle moving nonrelativistically through a curved manifold under the influence of an external electromagnetic field. We check that our sum of Feynman diagrams has all desired properties: it is coordinate-independent and well-defined without ultraviolet divergences, it satisfies the correct composition law, and it satisfies Schrödinger's equation thought of as a boundary-value problem in PDE. Path integrals in quantum mechanics and elsewhere in quantum field theory are almost always of the shape ∫ f e^s for some functions f (the "observable") and s (the "action"). In Chapter 2 we step back to analyze integrals of this type more generally. Integration by parts provides algebraic relations between the values of ∫ (-) e^s for different inputs, which can be packaged into a Batalin-Vilkovisky-type chain complex. Using some simple homological perturbation theory, we study the version of this complex that arises when f and s are taken to be polynomial functions, and power series are banished. We find that in such cases, the entire scheme-theoretic critical locus (complex points included) of s plays an important role, and that one can uniformly (but noncanonically) integrate out in a purely algebraic way the contributions to the integral from all "higher modes," reducing ∫ f e^s to an integral over the critical locus. This may help explain the presence of analytic continuation in questions like the
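
    In the simplest one-dimensional setting, the integration-by-parts relations referred to here take the following schematic form (assuming the boundary terms vanish; an illustration only, not the Batalin-Vilkovisky complex itself):

```latex
0 \;=\; \int \frac{\partial}{\partial x}\left( f\, e^{s} \right) dx
  \;=\; \int \left( \frac{\partial f}{\partial x}
        + f\,\frac{\partial s}{\partial x} \right) e^{s}\, dx ,
```

    which relates the integrals of different observables against the same weight e^s; the chain complex packages all such relations.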

  11. Continuity equation for probability as a requirement of inference over paths

    NASA Astrophysics Data System (ADS)

    González, Diego; Díaz, Daniela; Davis, Sergio

    2016-09-01

    Local conservation of probability, expressed as the continuity equation, is a central feature of non-equilibrium statistical mechanics. In the existing literature, the continuity equation is always motivated by heuristic arguments with no derivation from first principles. In this work we show that the continuity equation is a logical consequence of the laws of probability and the application of the formalism of inference over paths for dynamical systems. That is, the simple postulate that a system moves continuously through time following paths implies the continuity equation. The translation from the language of dynamical paths to the usual representation in terms of probability densities of states is performed by means of an identity derived from Bayes' theorem. The formalism presented here is valid independently of the nature of the system studied: it is applicable to physical systems and also to more abstract dynamics such as financial indicators and population dynamics in ecology, among others.
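
    For concreteness, the equation in question is the usual local conservation law, stated here in its standard form (the abstract itself does not spell it out):

```latex
\frac{\partial \rho(x,t)}{\partial t}
  + \nabla \cdot \left[ \rho(x,t)\, v(x,t) \right] = 0 ,
```

    where ρ is the probability density over states and v the associated drift velocity field.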

  12. Computational methods for probability of instability calculations

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based upon the roots of the characteristic equation or Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
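
    A single-degree-of-freedom illustration of what such a computation involves (a hedged sketch using plain Monte Carlo, not the reliability-analysis and importance-sampling machinery of the paper): the system m x″ + c x′ + k x = 0 is unstable when a root of the characteristic equation m s² + c s + k = 0 has a positive real part, so sampling random coefficients and counting such cases estimates the probability of instability.

```python
import numpy as np

rng = np.random.default_rng(0)

def unstable(m, c, k):
    """True if m*s**2 + c*s + k = 0 has a root with positive real part."""
    return np.any(np.roots([m, c, k]).real > 0.0)

# hypothetical uncertain parameters: mass fixed, damping and stiffness random
n = 100_000
m = 1.0
c = rng.normal(loc=0.05, scale=0.10, size=n)   # damping can go negative
k = rng.normal(loc=1.00, scale=0.20, size=n)

p_unstable = np.mean([unstable(m, ci, ki) for ci, ki in zip(c, k)])
print(f"estimated probability of instability: {p_unstable:.4f}")
```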

  13. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find excellent agreement between all methods in the general case, while large differences appear in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  14. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  15. Escape probability of geminate electron-ion recombination in the limit of large electron mean free path

    SciTech Connect

    Tachiya, M.; Schmidt, W.F.

    1989-02-15

    We investigated theoretically the geminate electron-ion recombination in the limit of large electron mean free path by using the concept of diffusion in energy space. The energy diffusion equation is derived and the energy diffusion coefficient is evaluated. An analytical expression for the escape probability of the electron-ion pair is derived. It reproduces very well the numerical results obtained previously by using a Monte Carlo method. Experimental implications of the present theoretical findings are discussed.

  16. Extending the application of critical path methods.

    PubMed

    Coffey, R J; Othman, J E; Walters, J I

    1995-01-01

    Most health care organizations are using critical pathways in an attempt to reduce the variation in patient care, improve quality, enhance communication, and reduce costs. Virtually all of the critical path efforts to date have developed tables of treatments, medications, and so forth by day and have displayed them in a format known as a Gantt chart. This article presents a methodology for identifying the true "time-limiting" critical path, describes three additional methods for presenting the information--the network, precedent, and resource formats--and shows how these can significantly enhance current critical path efforts.
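
    For readers unfamiliar with the distinction being drawn, the "time-limiting" critical path of a precedence network is the longest chain of dependent activities. A minimal sketch of that computation on a hypothetical care pathway (task names and durations are invented for illustration):

```python
from functools import lru_cache

# hypothetical activities: duration (hours) and prerequisite activities
tasks = {
    "admit":     (2, []),
    "labs":      (4, ["admit"]),
    "imaging":   (6, ["admit"]),
    "diagnosis": (3, ["labs", "imaging"]),
    "treatment": (8, ["diagnosis"]),
}

@lru_cache(maxsize=None)
def finish_time(task):
    """Earliest finish time = duration + latest finish among prerequisites."""
    duration, deps = tasks[task]
    return duration + max((finish_time(d) for d in deps), default=0)

def critical_path(task):
    """Trace back through the prerequisite with the latest finish time."""
    _, deps = tasks[task]
    if not deps:
        return [task]
    return critical_path(max(deps, key=finish_time)) + [task]

end = max(tasks, key=finish_time)
print("time-limiting path:", " -> ".join(critical_path(end)),
      "| total:", finish_time(end), "hours")
```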

  17. Examining the Probability of Identification for Gifted Programs for Students in Georgia Elementary Schools: A Multilevel Path Analysis Study

    ERIC Educational Resources Information Center

    McBee, Matthew

    2010-01-01

    This study focused on the analysis of a large-scale data set (N = 326,352) collected by the Georgia Department of Education using multilevel path analysis to model the probability that a student would be identified for participation in a gifted program. The model examined individual- and school-level factors that influence the probability that an…

  18. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.

  19. Computing tunneling paths with the Hamilton-Jacobi equation and the fast marching method

    NASA Astrophysics Data System (ADS)

    Dey, Bijoy K.; Ayers, Paul W.

    We present a new method for computing the most probable tunneling paths based on the minimum imaginary action principle. Unlike many conventional methods, the paths are calculated without resorting to an optimization (minimization) scheme. Instead, a fast marching method coupled with a back-propagation scheme is used to efficiently compute the tunneling paths. The fast marching method solves a Hamilton-Jacobi equation for the imaginary action on a discrete grid where the action value at an initial point (usually the reactant state configuration) is known in the beginning. Subsequently, a back-propagation scheme uses a steepest descent method on the imaginary action surface to compute a path connecting an arbitrary point on the potential energy surface (usually a state in the product valley) to the initial state. The proposed method is demonstrated for the tunneling paths of two different systems: a model 2D potential surface and the collinear reaction. Unlike existing methods, where the tunneling path is based on a presumed reaction coordinate and a correction is made with respect to the reaction coordinate within an 'adiabatic' approximation, the proposed method is very general and makes no assumptions about the relationship between the reaction coordinate and tunneling path.
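
    A rough illustration of the two-stage idea, a grid solve for the accumulated action followed by a backtrace (here Dijkstra's algorithm stands in for the fast-marching eikonal solver, and the cost surface is an invented toy; this is not the authors' Hamilton-Jacobi implementation):

```python
import heapq
import numpy as np

def accumulated_action(cost, start):
    """Front propagation: smallest accumulated cost from `start` to every
    grid cell (Dijkstra as a crude stand-in for a fast-marching solve)."""
    ny, nx = cost.shape
    acc = np.full((ny, nx), np.inf)
    acc[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        a, (i, j) = heapq.heappop(heap)
        if a > acc[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                na = a + 0.5 * (cost[i, j] + cost[ni, nj])
                if na < acc[ni, nj]:
                    acc[ni, nj] = na
                    heapq.heappush(heap, (na, (ni, nj)))
    return acc

def backtrace(acc, end):
    """Steepest-descent walk on the accumulated-action surface back to the start."""
    path = [end]
    while acc[path[-1]] > 0.0:
        i, j = path[-1]
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < acc.shape[0] and 0 <= j + dj < acc.shape[1]]
        path.append(min(nbrs, key=lambda n: acc[n]))
    return path[::-1]

# toy 2-D "imaginary action density": a barrier with a low-cost channel
y, x = np.mgrid[0:40, 0:40]
cost = 1.0 + 5.0 * np.exp(-((x - 20) ** 2) / 40.0) * (np.abs(y - 25) > 3)

acc = accumulated_action(cost, start=(5, 5))
path = backtrace(acc, end=(35, 35))
print("path length:", len(path), "accumulated action:", acc[35, 35])
```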

  20. Exact transition probabilities in a 6-state Landau–Zener system with path interference

    DOE PAGES

    Sinitsyn, Nikolai A.

    2015-04-23

    In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.
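
    For reference, the incoherent pairwise building block mentioned above is the standard two-state Landau-Zener formula, with constant diabatic coupling Δ and diabatic energies whose difference sweeps linearly at rate v = |d(E₁ − E₂)/dt|; the point of the paper is precisely that path interference makes the multistate probabilities deviate from products of such factors:

```latex
P_{\mathrm{diabatic}} = \exp\left( -\frac{2\pi \Delta^{2}}{\hbar v} \right),
\qquad
P_{\mathrm{adiabatic}} = 1 - P_{\mathrm{diabatic}} .
```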

  1. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  2. Methods for fitting a parametric probability distribution to most probable number data.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2012-07-01

    Every year hundreds of thousands, if not millions, of samples are collected and analyzed to assess microbial contamination in food and water. The concentration of pathogenic organisms at the end of the production process is low for most commodities, so a highly sensitive screening test is used to determine whether the organism of interest is present in a sample. In some applications, samples that test positive are subjected to quantitation. The most probable number (MPN) technique is a common method to quantify the level of contamination in a sample because it is able to provide estimates at low concentrations. This technique uses a series of dilution count experiments to derive estimates of the concentration of the microorganism of interest. An application for these data is food-safety risk assessment, where the MPN concentration estimates can be fitted to a parametric distribution to summarize the range of potential exposures to the contaminant. Many different methods (e.g., substitution methods, maximum likelihood and regression on order statistics) have been proposed to fit microbial contamination data to a distribution, but the development of these methods rarely considers how the MPN technique influences the choice of distribution function and fitting method. An often overlooked aspect when applying these methods is whether the data represent actual measurements of the average concentration of microorganisms per milliliter or are real-valued estimates of the average concentration, as is the case with MPN data. In this study, we propose two methods for fitting MPN data to a probability distribution. The first method uses a maximum likelihood estimator that takes average concentration values as the data inputs. The second is a Bayesian latent variable method that uses the counts of the number of positive tubes at each dilution to estimate the parameters of the contamination distribution. The performance of the two fitting methods is compared for two
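
    To make the first fitting route concrete at the level of a single sample, here is a hedged sketch of the standard MPN likelihood (illustrative dilution data; this is not the authors' two-stage estimator): each tube receiving volume v of the sample is positive with probability 1 − exp(−c·v) at concentration c, so c is estimated by maximizing a product of binomial terms.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# hypothetical dilution-series data: volume per tube (mL), tubes, positives
volumes   = np.array([10.0, 1.0, 0.1])
n_tubes   = np.array([3, 3, 3])
positives = np.array([3, 2, 0])

def neg_log_likelihood(log_c):
    """Negative log-likelihood of the MPN model: each tube is positive
    with probability 1 - exp(-c * v), independently of the others."""
    c = np.exp(log_c)                       # optimize on log scale so c > 0
    p_pos = np.clip(1.0 - np.exp(-c * volumes), 1e-12, 1 - 1e-12)
    ll = (positives * np.log(p_pos)
          + (n_tubes - positives) * np.log(1.0 - p_pos)).sum()
    return -ll

res = minimize_scalar(neg_log_likelihood, bounds=(-10, 10), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.3f} organisms per mL")
```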

  3. Guide waves-based multi-damage identification using a local probability-based diagnostic imaging method

    NASA Astrophysics Data System (ADS)

    Gao, Dongyue; Wu, Zhanjun; Yang, Lei; Zheng, Yuebin

    2016-04-01

    Multi-damage identification is an important and challenging task in the research of guide waves-based structural health monitoring. In this paper, a multi-damage identification method is presented using a guide waves-based local probability-based diagnostic imaging (PDI) method. The method includes a path damage judgment stage, a multi-damage judgment stage and a multi-damage imaging stage. First, damage imaging is performed by partition: the damage imaging regions are divided according to the beside-damage signal paths. The difference in guide wave propagation characteristics between cross-damage and beside-damage paths is established by theoretical analysis of the guide wave signal features, and the time-of-flight difference of the paths is used as a factor to distinguish between cross-damage and beside-damage paths. Then, a global PDI method (damage identification using all paths in the sensor network) is performed using the beside-damage path network. If the global PDI damage zone crosses a beside-damage path, it means that a discrete multi-damage configuration (such as a group of holes or cracks) has been misjudged as a continuous single damage (such as a single hole or crack) by the global PDI method. Subsequently, the damage imaging regions are separated by the beside-damage paths and local PDI (damage identification using only the paths within each damage imaging region) is performed in each region. Finally, the multi-damage identification results are obtained by superimposing the local damage imaging results and the marked cross-damage paths. The method is employed to inspect multi-damage in an aluminum plate with a surface-mounted piezoelectric ceramic sensor network. The results show that the guide waves-based multi-damage identification method is capable of visualizing the presence, quantity and location of structural damage.

  4. A probability generating function method for stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Kim, Pilwon; Lee, Chang Hyeong

    2012-06-01

    In this paper we present a probability generating function (PGF) approach for analyzing stochastic reaction networks. The master equation of the network can be converted to a partial differential equation for PGF. Using power series expansion of PGF and Padé approximation, we develop numerical schemes for finding probability distributions as well as first and second moments. We show numerical accuracy of the method by simulating chemical reaction examples such as a binding-unbinding reaction, an enzyme-substrate model, Goldbeter-Koshland ultrasensitive switch model, and G2/M transition model.
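
    A minimal illustration of the PGF idea on a network simple enough to solve by hand (an editor's example, not one of the paper's test cases): for pure degradation A → ∅ with rate k, starting from n₀ molecules, the master equation becomes a first-order PDE for G(z,t) = Σₙ P(n,t) zⁿ, whose solution and first moment are

```latex
\partial_t G = k\,(1 - z)\,\partial_z G,
\qquad
G(z,t) = \bigl(1 + (z - 1)\,e^{-kt}\bigr)^{n_0},
\qquad
\langle n(t) \rangle = \partial_z G\big|_{z=1} = n_0\, e^{-kt} .
```

    Higher moments follow from higher z-derivatives at z = 1, which is the role played by the power-series and Padé machinery for networks that cannot be solved in closed form.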

  5. Probability Density Function Method for Langevin Equations with Colored Noise

    SciTech Connect

    Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2013-04-05

    We present a novel method to derive closed-form, computable PDF equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal probability density functions (PDFs) of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.

  6. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.

  7. Analytical Mechanics in Stochastic Dynamics: Most Probable Path, Large-Deviation Rate Function and Hamilton-Jacobi Equation

    NASA Astrophysics Data System (ADS)

    Ge, Hao; Qian, Hong

    2012-09-01

    Analytical (rational) mechanics is the mathematical structure of Newtonian deterministic dynamics developed by D'Alembert, Lagrange, Hamilton, Jacobi, and many other luminaries of applied mathematics. Diffusion as a stochastic process of an overdamped individual particle immersed in a fluid, initiated by Einstein, Smoluchowski, Langevin and Wiener, has no momentum since its path is nowhere differentiable. In this exposition, we illustrate how analytical mechanics arises in stochastic dynamics from a randomly perturbed ordinary differential equation dX_t = b(X_t) dt + ε dW_t, where W_t is a Brownian motion. In the limit of vanishingly small ε, solutions of the stochastic differential equation other than ẋ = b(x) are all rare events. However, conditioned on an occurrence of such an event, the most probable trajectory of the stochastic motion is the solution of Lagrangian mechanics with L = ‖q̇ − b(q)‖²/4 and of the Hamiltonian equations with H(p, q) = ‖p‖² + b(q)·p. Hamiltonian conservation implies that the most probable trajectory for a "rare" event has a uniform "excess kinetic energy" along its path. Rare events can also be characterized by the principle of large deviations, which expresses the probability density function for X_t as f(x, t) = exp[−u(x, t)/ε], where u(x, t) is called a large-deviation rate function and satisfies the corresponding Hamilton-Jacobi equation. An irreversible diffusion process with ∇×b ≠ 0 corresponds to a Newtonian system with a Lorentz force: q̈ = (∇×b)×q̇ + (1/2)∇‖b‖². The connection between stochastic motion and analytical mechanics can be explored in terms of various techniques of applied mathematics, for example, singular perturbations, viscosity solutions and integrable systems.

  8. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that

  9. The "weighted ensemble" path sampling method is statistically exact for a broad class of stochastic processes and binning procedures.

    PubMed

    Zhang, Bin W; Jasnow, David; Zuckerman, Daniel M

    2010-02-01

    The "weighted ensemble" method, introduced by Huber and Kim [Biophys. J. 70, 97 (1996)], is one of a handful of rigorous approaches to path sampling of rare events. Expanding earlier discussions, we show that the technique is statistically exact for a wide class of Markovian and non-Markovian dynamics. The derivation is based on standard path-integral (path probability) ideas, but recasts the weighted-ensemble approach as simple "resampling" in path space. Similar reasoning indicates that arbitrary nonstatic binning procedures, which merely guide the resampling process, are also valid. Numerical examples confirm the claims, including the use of bins which can adaptively find the target state in a simple model.

  10. The method of modular characteristic direction probabilities in MPACT

    SciTech Connect

    Liu, Z.; Kochunas, B.; Collins, B.; Downar, T.; Wu, H.

    2013-07-01

    The method of characteristic direction probabilities (CDP) is based on a modular ray tracing technique which combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC). This past year CDP was implemented in the transport code MPACT for 2-D and 3-D transport calculations. By only coupling the fine mesh regions passed by the characteristic rays in the particular direction, the scale of the probabilities matrix is much smaller compared to the CPM. At the same time, the CDP has the same capacity for dealing with complicated geometries as the MOC, because the same modular ray tracing techniques are used. Results from the C5G7 benchmark problems are given for different cases to show the accuracy and efficiency of the CDP compared to MOC. For the cases examined, the CDP and MOC methods were seen to differ in k_eff by about 1-20 pcm, and the computational efficiency of the CDP appears to be better than the MOC for some problems. However, in other problems, particularly when the CDP matrices have to be recomputed from changing cross sections, the CDP does not perform as well. This indicates an area of future work. (authors)

  11. Goldstein-Kac telegraph processes with random speeds: Path probabilities, likelihoods, and reported Lévy flights.

    PubMed

    Sim, Aaron; Liepe, Juliane; Stumpf, Michael P H

    2015-04-01

    The Goldstein-Kac telegraph process describes the one-dimensional motion of particles with constant speed undergoing random changes in direction. Despite its resemblance to numerous real-world phenomena, the singular nature of the resultant spatial distribution of each particle precludes the possibility of any a posteriori empirical validation of this random-walk model from data. Here we show that by simply allowing for random speeds, the ballistic terms are regularized and that the diffusion component can be well-approximated via the unscented transform. The result is a computationally efficient yet robust evaluation of the full particle path probabilities and, hence, the parameter likelihoods of this generalized telegraph process. We demonstrate how a population diffusing under such a model can lead to non-Gaussian asymptotic spatial distributions, thereby mimicking the behavior of an ensemble of Lévy walkers. PMID:25974447

  12. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.
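
    A rough sketch of the final fitting step with synthetic stand-in data (the orbital Monte Carlo itself is beyond a snippet, and the paper's exact extreme-value family is not reproduced here): minimum-separation samples are fitted to an extreme-value density, here a Weibull distribution, from which the probability of the separation falling below a collision threshold follows directly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# stand-in for the Monte Carlo output: sampled minimum separations (km);
# in the paper these come from propagating the spacecraft and debris orbits
min_distances = rng.weibull(1.8, size=5000) * 50.0

# fit a Weibull density to the empirical minima (location pinned at zero)
shape, loc, scale = stats.weibull_min.fit(min_distances, floc=0.0)

# probability that the closest approach falls below a collision radius
collision_radius_km = 0.01
p_collision = stats.weibull_min.cdf(collision_radius_km, shape, loc, scale)
print(f"fitted shape={shape:.2f}, scale={scale:.1f} km, "
      f"P(d < {collision_radius_km} km) = {p_collision:.2e}")
```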

  13. Probability-theoretical analog of the vector Lyapunov function method

    SciTech Connect

    Nakonechnyi, A.N.

    1995-01-01

    The main ideas of the vector Lyapunov function (VLF) method were advanced in 1962 by Bellman and Matrosov. In this method, a Lyapunov function and a comparison equation are constructed for each subsystem. Then the dependences between the subsystems and the effect of external noise are allowed for by constructing a vector Lyapunov function (as a collection of the scalar Lyapunov functions of the subsystems) and an aggregate comparison function for the entire complex system. A probability-theoretical analog of this method for convergence analysis of stochastic approximation processes has been developed. The abstract approach proposed elsewhere eliminates all restrictions on the system phase space, the system trajectories, the class of Lyapunov functions, etc. The analysis focuses only on the conditions that relate sequences of Lyapunov function values with the derivative and ensure a particular type (mode, character) of stability. In our article, we extend this approach to the VLF method for discrete stochastic dynamic systems.

  14. The quantum bouncer by the path integral method

    NASA Astrophysics Data System (ADS)

    Goodings, D. A.; Szeredi, T.

    1991-10-01

    The path integral formulation of quantum mechanics in the semiclassical or WKB approximation provides a physically intuitive way of relating a classical system to its quantum analog. A fruitful way of studying quantum chaos is based upon applying the Gutzwiller periodic orbit sum rule, a result derived by the path integral method in the WKB approximation. This provides some motivation for learning about path integral techniques. In this paper a pedagogical example of the path integral formalism is presented in the hope of conveying the basic physical and mathematical concepts. The "quantum bouncer" is studied: the quantum version of a particle moving in one dimension above a perfectly reflecting surface while subject to a constant force directed toward the surface. The classical counterpart of this system is a ball bouncing on a floor in a constant gravitational field, collisions with the floor being assumed to be elastic. Path integration is used to derive the energy eigenvalues and eigenfunctions of the quantum bouncer in the WKB or semiclassical approximation. The results are shown to be the same as those obtained by solving the Schrödinger equation in the same approximation.
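
    For orientation, the semiclassical spectrum referred to here follows from the WKB quantization condition for the linear potential V(z) = mgz with a hard wall at z = 0 (an editor-supplied statement of the standard result, which the article rederives via path integration):

```latex
\int_{0}^{E_n/mg} \sqrt{2m\left(E_n - mgz\right)}\; dz
   = \left(n - \tfrac{1}{4}\right)\pi\hbar
\quad\Longrightarrow\quad
E_n = \left[\tfrac{9}{8}\,\pi^{2} m g^{2}\hbar^{2}
      \left(n - \tfrac{1}{4}\right)^{2}\right]^{1/3},
\qquad n = 1, 2, \dots
```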

  15. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  16. Path finding methods accounting for stoichiometry in metabolic networks.

    PubMed

    Pey, Jon; Prada, Joaquín; Beasley, John E; Planes, Francisco J

    2011-01-01

    Graph-based methods have been widely used for the analysis of biological networks. Their application to metabolic networks has been much discussed, in particular noting that an important weakness in such methods is that reaction stoichiometry is neglected. In this study, we show that reaction stoichiometry can be incorporated into path-finding approaches via mixed-integer linear programming. This major advance at the modeling level results in improved prediction of topological and functional properties in metabolic networks. PMID:21619601

  18. Accurate photometric redshift probability density estimation - method comparison and application

    NASA Astrophysics Data System (ADS)

    Rau, Markus Michael; Seitz, Stella; Brimioulle, Fabrice; Frank, Eibe; Friedrich, Oliver; Gruen, Daniel; Hoyle, Ben

    2015-10-01

    We introduce an ordinal classification algorithm for photometric redshift estimation, which significantly improves the reconstruction of photometric redshift probability density functions (PDFs) for individual galaxies and galaxy samples. As a use case we apply our method to CFHTLS galaxies. The ordinal classification algorithm treats distinct redshift bins as ordered values, which improves the quality of photometric redshift PDFs, compared with non-ordinal classification architectures. We also propose a new single value point estimate of the galaxy redshift, which can be used to estimate the full redshift PDF of a galaxy sample. This method is competitive in terms of accuracy with contemporary algorithms, which stack the full redshift PDFs of all galaxies in the sample, but requires orders of magnitude less storage space. The methods described in this paper greatly improve the log-likelihood of individual object redshift PDFs, when compared with a popular neural network code (ANNZ). In our use case, this improvement reaches 50 per cent for high-redshift objects (z ≥ 0.75). We show that using these more accurate photometric redshift PDFs will lead to a reduction in the systematic biases by up to a factor of 4, when compared with less accurate PDFs obtained from commonly used methods. The cosmological analyses we examine and find improvement upon are the following: gravitational lensing cluster mass estimates, modelling of angular correlation functions and modelling of cosmic shear correlation functions.

  19. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. They are two of the 13 stochastic methods contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
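
    Since LHS is the key addition discussed here, a compact sketch of the basic algorithm (not the NESSUS implementation): each of the d dimensions is divided into n equal-probability strata, one point is drawn per stratum, and the strata are randomly permuted across dimensions before mapping through the inverse CDFs of the input variables.

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube(n_samples, n_dims, seed=None):
    """Basic Latin hypercube sample on the unit hypercube [0, 1)^d: each
    dimension is split into n_samples equal-probability strata, and exactly
    one point falls in each stratum of each dimension."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n_dims))                   # jitter inside strata
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_dims):                               # decorrelate dimensions
        strata[:, j] = rng.permutation(strata[:, j])
    return strata

# usage: 1000 LHS points in 3 dimensions, mapped to standard normal variables
pts = latin_hypercube(1000, 3, seed=42)
samples = norm.ppf(pts)                                   # inverse-CDF transform
print(samples.mean(axis=0), samples.std(axis=0))          # close to 0 and 1
```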

  20. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  1. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  2. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  4. On path-following methods for structural failure problems

    NASA Astrophysics Data System (ADS)

    Stanić, Andjelka; Brank, Boštjan; Korelc, Jože

    2016-08-01

    We revisit the consistently linearized path-following method that can be applied in the nonlinear finite element analysis of solids and structures in order to compute a solution path. Within this framework, two constraint equations are considered: a quadratic one (that includes as special cases the popular spherical and cylindrical forms of constraint equation), and another one that constrains only one degree-of-freedom (DOF), the critical DOF. In both cases, the constrained DOFs may vary from one solution increment to another. The former constraint equation is successful in analysing geometrically nonlinear and/or standard inelastic problems with snap-throughs, snap-backs and bifurcation points. However, it cannot handle problems with material softening that are computed e.g. by embedded-discontinuity finite elements. Such problems can be solved by using the latter constraint equation. The pluses and minuses of both presented constraint equations are discussed and illustrated on a set of numerical examples. Some of the examples also include direct computation of critical points and branch switching. The direct computation of the critical points is performed in the framework of the path-following method by using yet another constraint function, which is eigenvector-free and suited to detect critical points.

  5. A Quantitative Method for Estimating Probable Public Costs of Hurricanes.

    PubMed

    BOSWELL; DEYLE; SMITH; BAKER

    1999-04-01

    / A method is presented for estimating probable public costs resulting from damage caused by hurricanes, measured as local government expenditures approved for reimbursement under the Stafford Act Section 406 Public Assistance Program. The method employs a multivariate model developed through multiple regression analysis of an array of independent variables that measure meteorological, socioeconomic, and physical conditions related to the landfall of hurricanes within a local government jurisdiction. From the regression analysis we chose a log-log (base 10) model that explains 74% of the variance in the expenditure data using population and wind speed as predictors. We illustrate application of the method for a local jurisdiction-Lee County, Florida, USA. The results show that potential public costs range from $4.7 million for a category 1 hurricane with winds of 137 kilometers per hour (85 miles per hour) to $130 million for a category 5 hurricane with winds of 265 kilometers per hour (165 miles per hour). Based on these figures, we estimate expected annual public costs of $2.3 million. These cost estimates: (1) provide useful guidance for anticipating the magnitude of the federal, state, and local expenditures that would be required for the array of possible hurricanes that could affect that jurisdiction; (2) allow policy makers to assess the implications of alternative federal and state policies for providing public assistance to jurisdictions that experience hurricane damage; and (3) provide information needed to develop a contingency fund or other financial mechanism for assuring that the community has sufficient funds available to meet its obligations. KEY WORDS: Hurricane; Public costs; Local government; Disaster recovery; Disaster response; Florida; Stafford Act
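
    The log-log regression form described above can be turned into a small cost calculator. The coefficients and the jurisdiction inputs below are purely hypothetical placeholders chosen for illustration; the paper's fitted values and the Lee County data are not reproduced here (Python):

        import math

        # Hypothetical coefficients (NOT the paper's fitted values).
        b0, b_pop, b_wind = -9.0, 0.8, 5.0

        def public_cost_usd(population, wind_kmh):
            # log-log (base 10) model: log10(cost) = b0 + b_pop*log10(pop) + b_wind*log10(wind)
            log_cost = b0 + b_pop * math.log10(population) + b_wind * math.log10(wind_kmh)
            return 10.0 ** log_cost

        # Illustrative jurisdiction and a range of landfall wind speeds [km/h].
        for wind in (137, 178, 209, 251, 265):
            print(wind, f"${public_cost_usd(450_000, wind):,.0f}")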

  6. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-08-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  7. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  8. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.

  9. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
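
    The published equations take the logistic form sketched below. The intercept and slope used here are hypothetical stand-ins for illustration only; the report's tables supply the basin-specific coefficients and the relevant winter-flow statistic (Python):

        import math

        b0, b1 = 4.0, -1.2    # hypothetical intercept and slope on log10(winter flow)

        def prob_summer_drought_flow(winter_mean_cfs):
            # P(summer flow falls below the drought threshold | winter streamflow)
            z = b0 + b1 * math.log10(winter_mean_cfs)
            return 1.0 / (1.0 + math.exp(-z))

        for q in (50, 200, 1000):    # example mean winter streamflows [cfs]
            print(q, round(prob_summer_drought_flow(q), 3))   # probability falls as winter flow rises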

  10. The New Method to forecast the Path of Typhoon

    NASA Astrophysics Data System (ADS)

    Xingcai, Zhang; Remote Sensing Center

    Forecasting the path of a typhoon is an important operational task. It is well known that steering by the average air stream of the environmental field is the primary mechanism, but in practice the movement of a typhoon is far more complex, and many of the contributing factors and physical processes are not yet understood. The energy of a typhoon comes from the latent heat released by condensing water vapor, and CISK theory indicates that this condensational heating plays a critical role in typhoon formation; however, little research has addressed how the released latent heat affects typhoon movement. This paper uses unconventional data, namely NCEP temperature data, to diagnose the relationship between anomalous temperature increases and the future direction of motion, and draws from this some ideas and technical methods for forecasting the direction of typhoon movement. 1 Data processing and approach: The paper uses real-time NCEP data to diagnose the track of the typhoon, regarding the environmental field before typhoon formation as the basic reference field and the typhoon as a disturbance source. As the typhoon moves, the environmental field at a given site, time, and direction changes, or changes ahead of the storm. Based on the CISK mechanism, and because a typhoon is a warm-core cyclone with convective activity in the middle and upper levels, where large amounts of vapor condense and release heat to form the warm core, we choose the 300-200 hPa temperature field as the research object and use the latest temperature data to subtract

  11. On the orthogonalised reverse path method for nonlinear system identification

    NASA Astrophysics Data System (ADS)

    Muhamad, P.; Sims, N. D.; Worden, K.

    2012-09-01

    The problem of obtaining the underlying linear dynamic compliance matrix in the presence of nonlinearities in a general multi-degree-of-freedom (MDOF) system can be solved using the conditioned reverse path (CRP) method introduced by Richards and Singh (1998 Journal of Sound and Vibration, 213(4): pp. 673-708). The CRP method also provides a means of identifying the coefficients of any nonlinear terms which can be specified a priori in the candidate equations of motion. Although the CRP has proved extremely useful in the context of nonlinear system identification, it has a number of small issues associated with it. One of these issues is the fact that the nonlinear coefficients are actually returned in the form of spectra which need to be averaged over frequency in order to generate parameter estimates. The parameter spectra are typically polluted by artefacts from the identification of the underlying linear system which manifest themselves at the resonance and anti-resonance frequencies. A further problem is associated with the fact that the parameter estimates are extracted in a recursive fashion which leads to an accumulation of errors. The first minor objective of this paper is to suggest ways to alleviate these problems without major modification to the algorithm. The results are demonstrated on numerically-simulated responses from MDOF systems. In the second part of the paper, a more radical suggestion is made, to replace the conditioned spectral analysis (which is the basis of the CRP method) with an alternative time domain decorrelation method. The suggested approach - the orthogonalised reverse path (ORP) method - is illustrated here using data from simulated single-degree-of-freedom (SDOF) and MDOF systems.

  12. Transition-Path Probability as a Test of Reaction-Coordinate Quality Reveals DNA Hairpin Folding Is a One-Dimensional Diffusive Process.

    PubMed

    Neupane, Krishna; Manuel, Ajay P; Lambert, John; Woodside, Michael T

    2015-03-19

    Chemical reactions are typically described in terms of progress along a reaction coordinate. However, the quality of reaction coordinates for describing reaction dynamics is seldom tested experimentally. We applied a framework for gauging reaction-coordinate quality based on transition-path analysis to experimental data for the first time, looking at folding trajectories of single DNA hairpin molecules measured under tension applied by optical tweezers. The conditional probability for being on a reactive transition path was compared with the probability expected for ideal diffusion over a 1D energy landscape based on the committor function. Analyzing measurements and simulations of hairpin folding where end-to-end extension is the reaction coordinate, after accounting for instrumental effects on the analysis, we found good agreement between transition-path and committor analyses for model two-state hairpins, demonstrating that folding is well-described by 1D diffusion. This work establishes transition-path analysis as a powerful new tool for testing experimental reaction-coordinate quality.
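
    The comparison at the heart of this test uses the relation expected for ideal one-dimensional diffusion, p(TP|x) = 2*phi(x)*(1 - phi(x)), where phi is the committor; a measured transition-path probability tracing this curve, which never exceeds 0.5, indicates a good reaction coordinate. A minimal numerical illustration with synthetic committor values (Python):

        import numpy as np

        def p_tp_ideal(phi):
            # Expected transition-path probability for ideal 1-D diffusion,
            # given the committor phi(x); its maximum possible value is 0.5.
            return 2.0 * phi * (1.0 - phi)

        phi = np.linspace(0.0, 1.0, 11)   # synthetic committor values along the coordinate
        print(np.round(p_tp_ideal(phi), 3))
        # Measured p(TP|x) lying on this curve, peaking near 0.5 where phi = 0.5,
        # is the signature of one-dimensional diffusive folding.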

  13. Path Sampling Methods for Enzymatic Quantum Particle Transfer Reactions.

    PubMed

    Dzierlenga, M W; Varga, M J; Schwartz, S D

    2016-01-01

    The mechanisms of enzymatic reactions are studied via a host of computational techniques. While previous methods have been used successfully, many fail to incorporate the full dynamical properties of enzymatic systems. This can lead to misleading results in cases where enzyme motion plays a significant role in the reaction coordinate, which is especially relevant in particle transfer reactions where nuclear tunneling may occur. In this chapter, we outline previous methods, as well as discuss newly developed dynamical methods to interrogate mechanisms of enzymatic particle transfer reactions. These new methods allow for the calculation of free energy barriers and kinetic isotope effects (KIEs) with the incorporation of quantum effects through centroid molecular dynamics (CMD) and the full complement of enzyme dynamics through transition path sampling (TPS). Recent work, summarized in this chapter, applied the method for calculation of free energy barriers to reaction in lactate dehydrogenase (LDH) and yeast alcohol dehydrogenase (YADH). We found that tunneling plays an insignificant role in YADH but plays a more significant role in LDH, though not dominant over classical transfer. Additionally, we summarize the application of a TPS algorithm for the calculation of reaction rates in tandem with CMD to calculate the primary H/D KIE of YADH from first principles. We found that the computationally obtained KIE is within the margin of error of experimentally determined KIEs and corresponds to the KIE of particle transfer in the enzyme. These methods provide new ways to investigate enzyme mechanism with the inclusion of protein and quantum dynamics.

  14. Path Sampling Methods for Enzymatic Quantum Particle Transfer Reactions.

    PubMed

    Dzierlenga, M W; Varga, M J; Schwartz, S D

    2016-01-01

    The mechanisms of enzymatic reactions are studied via a host of computational techniques. While previous methods have been used successfully, many fail to incorporate the full dynamical properties of enzymatic systems. This can lead to misleading results in cases where enzyme motion plays a significant role in the reaction coordinate, which is especially relevant in particle transfer reactions where nuclear tunneling may occur. In this chapter, we outline previous methods, as well as discuss newly developed dynamical methods to interrogate mechanisms of enzymatic particle transfer reactions. These new methods allow for the calculation of free energy barriers and kinetic isotope effects (KIEs) with the incorporation of quantum effects through centroid molecular dynamics (CMD) and the full complement of enzyme dynamics through transition path sampling (TPS). Recent work, summarized in this chapter, applied the method for calculation of free energy barriers to reaction in lactate dehydrogenase (LDH) and yeast alcohol dehydrogenase (YADH). We found that tunneling plays an insignificant role in YADH but plays a more significant role in LDH, though not dominant over classical transfer. Additionally, we summarize the application of a TPS algorithm for the calculation of reaction rates in tandem with CMD to calculate the primary H/D KIE of YADH from first principles. We found that the computationally obtained KIE is within the margin of error of experimentally determined KIEs and corresponds to the KIE of particle transfer in the enzyme. These methods provide new ways to investigate enzyme mechanism with the inclusion of protein and quantum dynamics. PMID:27497161

  15. Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel

    2011-01-01

    Archaeological sites are being compromised or destroyed at a catastrophic rate in most regions of the world. The best solution to this problem is for archaeologists to find and study these sites before they are compromised or destroyed. One way to facilitate the necessary rapid, wide area surveys needed to find these archaeological sites is through the generation of maps of probable archaeological sites from remotely sensed data. We describe an approach for identifying probable locations of archaeological sites over a wide area based on detecting subtle anomalies in vegetative cover through a statistically based analysis of remotely sensed data from multiple sources. We further developed this approach under a recent NASA ROSES Space Archaeology Program project. Under this project we refined and elaborated this statistical analysis to compensate for potential slight misregistrations between the remote sensing data sources and the archaeological site location data. We also explored data quantization approaches (required by the statistical analysis approach), and we identified a superior data quantization approach based on a unique image segmentation method. In our presentation we will summarize our refined approach and demonstrate the effectiveness of the overall approach with test data from Santa Catalina Island off the southern California coast. Finally, we discuss our future plans for further improving our approach.

  16. Development of partial failure analysis method in probability risk assessments

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    This paper presents a new approach to evaluate the partial failure effect on current Probability Risk Assessments (PRAs). An integrated methodology of the thermal-hydraulic analysis and fuzzy logic simulation using the Dynamic Master Logic Diagram (DMLD) was developed. The thermal-hydraulic analysis used in this approach is to identify partial operation effect of any PRA system function in a plant model. The DMLD is used to simulate the system performance of the partial failure effect and inspect all minimal cut sets of system functions. This methodology can be applied in the context of a full scope PRA to reduce core damage frequency. An example of this application of the approach is presented. The partial failure data used in the example is from a survey study of partial failure effects from the Nuclear Plant Reliability Data System (NPRDS).

  17. Comparing probability and non-probability sampling methods in Ecstasy research: implications for the internet as a research tool.

    PubMed

    Miller, Peter G; Johnston, Jennifer; Dunn, Matthew; Fry, Craig L; Degenhardt, Louisa

    2010-02-01

    The use of Ecstasy and related drugs (ERD) has increasingly been the focus of epidemiological and other public health-related research. One of the more promising methods is the use of the Internet as a recruitment and survey tool. However, there remain methodological concerns and questions about representativeness. Three samples of ERD users in Melbourne, Australia, surveyed in 2004, are compared in terms of a number of key demographic and drug use variables. The Internet, face-to-face, and probability sampling methods appear to access similar but not identical groups of ERD users. Implications and limitations of the study are noted and future research is recommended.

  18. The universal path integral

    NASA Astrophysics Data System (ADS)

    Lloyd, Seth; Dreyer, Olaf

    2016-02-01

    Path integrals calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness. The probabilities for events corresponding to sub-integrals can be calculated using the method of decoherent histories. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures.

  19. The experiment to detect equivalent optical path difference in independent double aperture interference light path based on step scanning method

    NASA Astrophysics Data System (ADS)

    Wang, Chaoyan; Chen, Xin-yang; Zheng, Lixin; Ding, Yuanyuan

    2014-11-01

    The fringe test is a method for detecting the relative optical path difference in an optical synthetic-aperture telescope array. To obtain interference fringes, the two beams of light must be within the coherence length at the point where they meet. In the step scanning method, the optical path of one arm is changed in specific steps within the coherence length by moving a micro-displacement actuator, and a fringe pattern is recorded at each step. The fringe patterns evolve from appearing, to clearest, to disappearing. A particular pixel is selected and its intensity is tracked across every recorded frame; from the intensity change, the position of best relative optical path difference, which is also the position of the clearest fringe, can be determined. The infrared source has a wavelength of 1290 nm and a bandwidth of 63.6 nm. In this experiment, the coherence length of the infrared source was first measured with a cube-reflection experiment; after data collection and processing, the measured value of 30 μm differs little from the theoretically calculated 26 μm. To further test the relative optical path of the synthetic aperture using the step scanning method, the infrared source was placed in the optical path of the double-aperture synthetic-aperture telescope. The precise position of the actuator is obtained when the fringe is clearest. The experiment shows that the actuating step size affects the precision of the equivalent optical path: the smaller the step size, the more accurate the position, but a smaller step also means more steps within the coherence length and a longer measurement time.

  20. [Measurement of path transverse wind velocity profile using light forward scattering scintillation correlation method].

    PubMed

    Yuan, Ke-E; Lü, Wei-Yu; Zheng, Li-Nan; Hu, Shun-Xing; Huang, Jian; Cao, Kai-Fa; Xu, Zhi-Hai

    2014-07-01

    A new method for measuring the path transverse wind velocity is introduced, based on analyzing the time-lagged covariance functions of Hartmann wavefront sensor sub-apertures with different separations. A theoretical formula is derived for the transverse wind velocity profile along the light propagation path. Because the path weighting function differs for different sub-aperture spacings, the selection of reasonable path weighting functions is analyzed. Using a Hartmann wavefront sensor, an experiment measuring the path transverse velocity profile along a 1 000 m horizontal propagation path was carried out for the first time to our knowledge. The experimental results were as follows. The path transverse averaged velocity from the sensor agreed well with the transverse velocity from the wind anemometer sited near the receiving end of the path. When the path was divided into two sections, the transverse velocity of the first section also agreed well with that of the second one. Because of the different underlying surfaces along the light path, the former was greater than the latter over the whole experiment period; the averaged values were 1.273 and 0.952 m s(-1), respectively. The transverse velocity of the second section and the path transverse averaged velocity showed the same trends of decrease and increase with time, with correlation coefficients reaching 0.86.
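
    The core of the covariance approach can be sketched simply: two sub-apertures separated by a baseline see the same turbulence-driven fluctuations displaced in time, and the lag of the peak of their cross-covariance gives the transit time and hence the transverse wind speed. The signal model, sample rate, and baseline below are synthetic assumptions, not the experiment's data (Python):

        import numpy as np

        rng = np.random.default_rng(1)
        fs, d, v_true = 100.0, 0.05, 1.0    # sample rate [Hz], baseline [m], wind [m/s] (assumed)
        lag_true = int(round(d / v_true * fs))

        # Synthetic "frozen turbulence" record: the downstream sub-aperture sees
        # the upstream signal delayed by the transit time d / v.
        n = 5000
        raw = np.convolve(rng.normal(size=n + lag_true), np.ones(25) / 25, mode="same")
        s_up, s_dn = raw[lag_true:], raw[:n]

        # Time-lagged cross-covariance; the lag of its peak estimates the transit time.
        su, sd = s_up - s_up.mean(), s_dn - s_dn.mean()
        xcov = [np.mean(su[:n - k] * sd[k:]) for k in range(1, 51)]
        tau = (1 + int(np.argmax(xcov))) / fs
        print(f"estimated transverse wind ~ {d / tau:.2f} m/s (true {v_true} m/s)")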

  1. Analyzing methods for path mining with applications in metabolomics.

    PubMed

    Tagore, Somnath; Chowdhury, Nirmalya; De, Rajat K

    2014-01-25

    Metabolomics is one of the key approaches of systems biology that consists of studying biochemical networks having a set of metabolites, enzymes, reactions and their interactions. As biological networks are very complex in nature, proper techniques and models need to be chosen for their better understanding and interpretation. One useful strategy in this regard is to use path mining techniques and graph-theoretical approaches that help in building hypothetical models and performing quantitative analysis. Furthermore, they also contribute to analyzing topological parameters in metabolome networks. Path mining techniques can be based on grammars, keys, patterns and indexing. Moreover, they can also be used for modeling metabolome networks, finding structural similarities between metabolites, in-silico metabolic engineering, shortest path estimation and various graph-based analyses. In this manuscript, we have highlighted some core and applied areas of path-mining for modeling and analysis of metabolic networks. PMID:24230973

  2. Nonlinear multi-agent path search method based on OFDM communication

    NASA Astrophysics Data System (ADS)

    Sato, Masatoshi; Igarashi, Yusuke; Tanaka, Mamoru

    This paper presents a novel shortest-path searching system based on analog circuit analysis, called the sequential local current comparison method on an alternating-current (AC) circuit (AC-SLCC). The local current comparison (LCC) method is a path searching method in which the path is selected in the direction of maximum current in a direct-current (DC) resistive circuit. Since a plurality of shortest-path searches can be performed with the LCC method by solving for the current distribution of the resistive circuit, the shortest path problem can be solved extremely quickly. The AC-SLCC method is a novel LCC method combined with orthogonal frequency-division multiplexing (OFDM) communication on an AC circuit. It is able to send data along the shortest path without major data loss, which suggests the possibility of applications to a variety of systems (especially OFDM communication techniques).
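
    The LCC idea summarized above, choosing at each node the branch that carries the largest current, can be reproduced on a small DC resistor network by solving the circuit's node equations and then walking greedily along the largest potential drops. The sketch below uses a 4x4 grid of unit resistors with current injected at one corner and extracted at the opposite corner; it is a toy DC version, not the paper's AC-SLCC system with OFDM signalling (Python):

        import numpy as np

        # 4x4 grid of unit resistors; nodes 0..15, 4-neighbour edges.
        N = 4
        idx = lambda r, c: r * N + c
        edges = []
        for r in range(N):
            for c in range(N):
                if c + 1 < N: edges.append((idx(r, c), idx(r, c + 1)))
                if r + 1 < N: edges.append((idx(r, c), idx(r + 1, c)))

        n = N * N
        L = np.zeros((n, n))                    # graph Laplacian (unit conductances)
        for u, v in edges:
            L[u, u] += 1; L[v, v] += 1
            L[u, v] -= 1; L[v, u] -= 1

        source, sink = 0, n - 1
        b = np.zeros(n); b[source], b[sink] = 1.0, -1.0    # inject/withdraw 1 A

        keep = [i for i in range(n) if i != sink]          # ground the sink, solve for V
        V = np.zeros(n)
        V[keep] = np.linalg.solve(L[np.ix_(keep, keep)], b[keep])

        # LCC walk: from each node, step along the branch carrying the largest
        # outgoing current (largest potential drop across a unit resistor).
        path, node = [source], source
        while node != sink:
            nbrs = [v for u, v in edges if u == node] + [u for u, v in edges if v == node]
            node = max(nbrs, key=lambda m: V[node] - V[m])
            path.append(node)
        print(path)     # a corner-to-corner route selected by maximum branch current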

  3. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  4. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  5. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-01

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  6. A novel frequent probability pattern mining algorithm based on circuit simulation method in uncertain biological networks

    PubMed Central

    2014-01-01

    Background Motif mining has always been a hot research topic in bioinformatics. Most of current research on biological networks focuses on exact motif mining. However, due to the inevitable experimental error and noisy data, biological network data represented as the probability model could better reflect the authenticity and biological significance, therefore, it is more biological meaningful to discover probability motif in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery which is usually based on the possible world model having a relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in the uncertain biological networks. First, the partition based efficient search is applied to the non-tree like subgraph mining where the probability of occurrence in random networks is small. Then, an algorithm of probability isomorphic based on circuit simulation is proposed. The probability isomorphic combines the analysis of circuit topology structure with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs. The circuit simulation based probability isomorphic can avoid using traditional possible world model. Finally, based on the algorithm of probability subgraph isomorphism, two-step hierarchical clustering method is used to cluster subgraphs, and discover frequent probability patterns from the clusters. Results The experiment results on data sets of the Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  7. Secondary Path Modeling Method for Active Noise Control of Power Transformer

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Liang, Jiabi; Liang, Yuanbin; Wang, Lixin; Pei, Xiugao; Li, Peng

    The accuracy of the secondary path modeling is critical to the stability of an active noise control system. Provided the input and output of the secondary path are known, system identification theory can be used to identify the path. Based on the experimental data, correlation analysis is adopted to eliminate the random noise and nonlinear harmonics in the output data in order to obtain an accurate frequency characteristic of the secondary path. After that, Levy's method is applied to identify the transfer function of the path. Computer simulation results are given, showing that the proposed off-line modeling method is feasible and applicable. Finally, Levy's method is used to obtain an accurate secondary path model in an active transformer-noise-control experiment, reducing the noise level by about 10 dB.

  8. Path segmentation for beginners: an overview of current methods for detecting changes in animal movement patterns.

    PubMed

    Edelhoff, Hendrik; Signer, Johannes; Balkenhol, Niko

    2016-01-01

    Increased availability of high-resolution movement data has led to the development of numerous methods for studying changes in animal movement behavior. Path segmentation methods provide basics for detecting movement changes and the behavioral mechanisms driving them. However, available path segmentation methods differ vastly with respect to underlying statistical assumptions and output produced. Consequently, it is currently difficult for researchers new to path segmentation to gain an overview of the different methods, and choose one that is appropriate for their data and research questions. Here, we provide an overview of different methods for segmenting movement paths according to potential changes in underlying behavior. To structure our overview, we outline three broad types of research questions that are commonly addressed through path segmentation: 1) the quantitative description of movement patterns, 2) the detection of significant change-points, and 3) the identification of underlying processes or 'hidden states'. We discuss advantages and limitations of different approaches for addressing these research questions using path-level movement data, and present general guidelines for choosing methods based on data characteristics and questions. Our overview illustrates the large diversity of available path segmentation approaches, highlights the need for studies that compare the utility of different methods, and identifies opportunities for future developments in path-level data analysis.

  9. Path segmentation for beginners: an overview of current methods for detecting changes in animal movement patterns.

    PubMed

    Edelhoff, Hendrik; Signer, Johannes; Balkenhol, Niko

    2016-01-01

    Increased availability of high-resolution movement data has led to the development of numerous methods for studying changes in animal movement behavior. Path segmentation methods provide basics for detecting movement changes and the behavioral mechanisms driving them. However, available path segmentation methods differ vastly with respect to underlying statistical assumptions and output produced. Consequently, it is currently difficult for researchers new to path segmentation to gain an overview of the different methods, and choose one that is appropriate for their data and research questions. Here, we provide an overview of different methods for segmenting movement paths according to potential changes in underlying behavior. To structure our overview, we outline three broad types of research questions that are commonly addressed through path segmentation: 1) the quantitative description of movement patterns, 2) the detection of significant change-points, and 3) the identification of underlying processes or 'hidden states'. We discuss advantages and limitations of different approaches for addressing these research questions using path-level movement data, and present general guidelines for choosing methods based on data characteristics and questions. Our overview illustrates the large diversity of available path segmentation approaches, highlights the need for studies that compare the utility of different methods, and identifies opportunities for future developments in path-level data analysis. PMID:27595001

  10. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  11. An efficient surrogate-based method for computing rare failure probability

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Jinglai; Xiu, Dongbin

    2011-10-01

    In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation: it incurs much reduced simulation effort compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probabilities as small as 10^-12 to 10^-6 with only several hundred samples.
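
    The cross-entropy ingredient can be sketched on a toy limit state. The code below omits the paper's surrogate model entirely and simply uses CE iterations to tilt a Gaussian sampling density toward the failure region, followed by an importance-sampling estimate; the limit-state function and sample sizes are arbitrary choices for illustration (Python):

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(7)
        d = 2
        def g(x):                              # failure when g(x) <= 0
            return 5.5 - x.sum(axis=-1)        # rare event under independent standard normals

        # Cross-entropy iterations: shift the sampling mean toward the failure region.
        mu = np.zeros(d)
        for _ in range(5):
            x = rng.normal(mu, 1.0, size=(2000, d))
            elite = x[np.argsort(g(x))[:200]]      # samples closest to / inside failure
            mu = elite.mean(axis=0)

        # Importance sampling with the tilted density N(mu, I).
        x = rng.normal(mu, 1.0, size=(20000, d))
        w = np.exp(-0.5 * (x**2).sum(axis=1) + 0.5 * ((x - mu)**2).sum(axis=1))  # phi(x)/phi_mu(x)
        p_fail = np.mean((g(x) <= 0) * w)

        print(p_fail, norm.cdf(-5.5 / np.sqrt(2)))   # IS estimate vs exact value for this toy g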

  12. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  13. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  14. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  15. A kinetic model for voltage-gated ion channels in cell membranes based on the path integral method

    NASA Astrophysics Data System (ADS)

    Erdem, Rıza; Ekiz, Cesur

    2005-04-01

    A kinetic model of cell membrane ion channels is proposed based on the path integral method. From the Pauli-type master equations valid on a macroscopic time scale, we derive a first-order differential equation, or kinetic equation, which governs the temporal evolution of the channel system along the paths of extreme probability. Using known parameters for the batrachotoxin (BTX)-modified sodium channels in squid giant axon, the time dependence of the channel activation and the voltage dependence of the corresponding time constants (τ) are examined numerically. It is found that the channel activation relaxes to its steady-state (or equilibrium) value for a given membrane potential, and the corresponding time constant reaches a maximum at a certain potential and thereafter decreases in magnitude as the membrane potential increases. A qualitative comparison between these results and the results of Hodgkin-Huxley theory, the path probability method and thermodynamic models, as well as the cut-open axon technique, is presented. Good agreement is achieved.
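
    The relaxation behaviour described above, channel activation approaching a voltage-dependent steady state with a time constant that peaks at an intermediate potential, can be mimicked with a first-order toy model. The functional forms and parameter values below are illustrative placeholders, not the fitted BTX-modified-channel quantities (Python):

        import numpy as np

        # Placeholder steady-state activation and time constant (not fitted values).
        def n_inf(V):  return 1.0 / (1.0 + np.exp(-(V + 40.0) / 10.0))
        def tau(V):    return 1.0 + 4.0 * np.exp(-((V + 40.0) / 30.0) ** 2)   # ms, peaks near -40 mV

        V = -20.0                   # clamped membrane potential [mV]
        dt, T = 0.01, 20.0          # time step and duration [ms]
        n, trace = 0.0, []
        for _ in range(int(T / dt)):
            n += dt * (n_inf(V) - n) / tau(V)     # dn/dt = (n_inf - n) / tau
            trace.append(n)
        print(round(trace[-1], 3), round(n_inf(V), 3))   # activation relaxes to the steady state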

  16. Environment induced time arrow and the Closed Time Path method

    NASA Astrophysics Data System (ADS)

    Polonyi, Janos

    2013-06-01

    It is shown in the framework of a harmonic system that the thermodynamical time arrow is induced by the environmental initial conditions in a manner similar to spontaneous symmetry breaking. The Closed Time Path formalism is introduced in classical mechanics to handle Green functions for initial condition problems by the action principle, in a systematic manner. The application of this scheme for quantum systems shows the common dynamical origin of the thermodynamical and the quantum time arrows. It is furthermore conjectured that the quantum-classical transition is strongly coupled.

  17. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
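
    The MPP step itself is a constrained minimization: find the point on the failure surface g(u) = 0 closest to the origin in standard normal space; its distance is the reliability index beta, and FORM approximates the failure probability as Phi(-beta). A minimal sketch with an arbitrary linear limit state, chosen so the exact answer is available as a check (Python):

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def g(u):
            return 3.0 - u[0] - 0.5 * u[1]     # toy limit state; failure when g(u) <= 0

        # MPP search: minimize ||u||^2 subject to g(u) = 0.
        res = minimize(lambda u: u @ u, x0=np.array([1.0, 1.0]),
                       constraints={"type": "eq", "fun": g})
        beta = np.linalg.norm(res.x)            # reliability index = distance to the MPP
        pf_form = norm.cdf(-beta)               # FORM estimate of the failure probability

        pf_exact = norm.cdf(-3.0 / np.sqrt(1.0 + 0.25))    # exact for this linear g
        print(res.x, beta, pf_form, pf_exact)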

  18. A new method to calculate reaction paths for conformation transitions of large molecules

    NASA Astrophysics Data System (ADS)

    Smart, Oliver S.

    1994-05-01

    Path energy minimization (PEM), a novel method for the generation of a reaction path linking two known conformers of a molecule, is presented. The technique is based on optimizing a function which closely approximates the peak potential energy of a quasi-continuous path between the fixed end points. A transition involving the change in the pucker angle of α-D-xylulofuranose is used as a test case. The method is shown to be capable of identifying transition-state structures and energy barriers. The utility of the method is demonstrated by an application to a substantial conformational transition of the ion-channel-forming polypeptide gramicidin A.

  19. Method of fabricating an abradable gas path seal

    NASA Technical Reports Server (NTRS)

    Bill, R. C.; Wisander, D. W. (Inventor)

    1984-01-01

    The thermal shock resistance of a ceramic layer is improved. The invention is particularly directed to an improved abradable lining that is deposited on a shroud forming a gas path in turbomachinery. Improved thermal shock resistance of the shroud is effected through the deliberate introduction of benign cracks. These are microcracks which will not propagate appreciably upon exposure to the thermal shock environment in which a turbine seal must function. Laser surface fusion treatment is used to introduce these microcracks. The ceramic surface is laser scanned to form a continuous dense layer. As this layer cools and solidifies, shrinkage results in the formation of a very fine crack network. The presence of this deliberately introduced fine crack network precludes the formation of a catastrophic crack during thermal shock exposure.

  20. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, uses 1 in 200 years, the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but the base data are water levels or discharges and the probability is 1 in 1250 years (in the freshwater section). In contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "why does the method vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set using the historical maximum method. The historical maximum method was used until World War 2, but it was changed to the probability method after the war because of its limitations under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of
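
    In practice, the probability method amounts to fitting an extreme-value distribution to the annual-maximum record and reading off the quantile for the chosen return period. The sketch below uses a synthetic rainfall series and a Gumbel fit purely for illustration; an actual design-flood study would use the basin's observed record and its prescribed return period (Python):

        import numpy as np
        from scipy.stats import gumbel_r

        rng = np.random.default_rng(3)

        # Synthetic annual-maximum basin rainfall series [mm/day]; a real analysis
        # would use the observed record for the basin in question.
        annual_max = gumbel_r.rvs(loc=150.0, scale=40.0, size=80, random_state=rng)

        # Fit the extreme-value distribution and read off the design quantile.
        loc, scale = gumbel_r.fit(annual_max)
        for T in (100, 150, 200):                    # return periods [years]
            design = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
            print(f"1-in-{T}-year rainfall ~ {design:.0f} mm/day")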

  1. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrent relations is used to obtain the convergent probability of the random walk with different initial positions.
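
    A standard recurrence-relation calculation in the same spirit, the absorption probabilities of a biased walk on a finite path as a function of the starting position, is sketched below; the step probability is an arbitrary example value, and the note's specific card-based walk is not reproduced (Python):

        import numpy as np

        # Absorption probabilities for a walk on sites 0..N with absorbing ends.
        N, q = 10, 0.55               # q = probability of stepping right (assumed)

        # Recurrence p_i = q * p_{i+1} + (1 - q) * p_{i-1}, with p_0 = 0 and p_N = 1,
        # written as a linear system and solved directly.
        A = np.eye(N + 1)
        b = np.zeros(N + 1)
        b[N] = 1.0
        for i in range(1, N):
            A[i, i - 1] = -(1 - q)
            A[i, i + 1] = -q
        p = np.linalg.solve(A, b)

        # Closed form for comparison: p_i = (1 - r**i) / (1 - r**N), r = (1 - q) / q.
        r = (1 - q) / q
        closed = (1 - r ** np.arange(N + 1)) / (1 - r ** N)
        print(np.round(p, 4))
        print(np.round(closed, 4))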

  2. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
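
    A compact way to see machine-learning probability estimation in action is to compare the predict_proba output of a random forest with logistic regression on synthetic data, scoring both with the Brier score; this sketch covers only the RF case from the paper's list of methods, and the data set and settings are arbitrary (Python):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import brier_score_loss

        X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
        }
        for name, model in models.items():
            prob = model.fit(Xtr, ytr).predict_proba(Xte)[:, 1]   # estimated P(Y = 1 | x)
            print(name, "Brier score:", round(brier_score_loss(yte, prob), 4))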

  3. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in planning paths. In the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
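
    For reference, the classic APF ingredients that the paper starts from, an attractive potential toward the goal plus repulsive potentials near obstacles followed by gradient descent, can be sketched as below. The gains, obstacle layout, and step size are arbitrary, and the paper's optimal-control reformulation, which addresses the dead point (local minimum) problem of this plain scheme, is not reproduced (Python):

        import numpy as np

        goal = np.array([9.0, 9.0])
        obstacles = [np.array([4.0, 5.0]), np.array([6.5, 7.0])]
        k_att, k_rep, d0 = 1.0, 8.0, 2.0        # gains and repulsion cutoff (assumed)

        def force(p):
            f = -k_att * (p - goal)                               # attractive term
            for obs in obstacles:
                d = np.linalg.norm(p - obs)
                if d < d0:                                        # repulsion only inside cutoff
                    f += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (p - obs)
            return f

        p, path, step = np.array([0.0, 0.0]), [], 0.05
        for _ in range(2000):
            path.append(p.copy())
            f = force(p)
            p = p + step * f / (np.linalg.norm(f) + 1e-9)         # unit-length gradient step
            if np.linalg.norm(p - goal) < 0.1:                    # stop near the goal
                break
        print(len(path), p)    # number of steps taken and final position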

  4. Computer code long path method for long path differential-absorption predictions using CO{sub 2} laser lines

    SciTech Connect

    Zuev, V.V.; Mitsel', A.A.; Kataev, M.Y.; Ptashnik, I.V.; Firsov, K.M.

    1995-11-01

    A computer program LPM (Long Path Method) has been developed for imitative modeling of the concentration of gases (H2O, CO2, O3, NH3, C2H4) in the atmosphere using a long-path double-wavelength laser system equipped with two tunable CO2 lasers. The model is designed for four different lasing isotopes of CO2 (12C16O2, 13C16O2, 12C18O2, 13C18O2). The program determines optimal pairs of CO2 laser wavelengths, and the gas concentration retrieval errors from sounding data caused both by detector noise and systematic inaccuracy. The program was written in MS FORTRAN and Visual Basic for Windows 3.1 and an IBM-compatible PC. (c) 1995 American Institute of Physics.
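
    The core of such a two-wavelength differential-absorption retrieval can be written in a few lines; the sketch below uses the standard relation N = ln(P_off/P_on) / (2 L (sigma_on - sigma_off)) for a double-pass path, with made-up cross-sections and powers, and is not taken from the LPM code itself:

      import math

      L = 500.0            # one-way path length, m
      sigma_on  = 1.2e-22  # absorption cross-section at the on-line wavelength, cm^2 (hypothetical)
      sigma_off = 0.1e-22  # cross-section at the off-line wavelength, cm^2 (hypothetical)
      P_on, P_off = 0.72, 1.00   # received powers at the two wavelengths (relative units)

      L_cm = L * 100.0
      # mean gas concentration along the path, molecules per cm^3
      N = math.log(P_off / P_on) / (2.0 * L_cm * (sigma_on - sigma_off))
      print(f"retrieved concentration: {N:.3e} cm^-3")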

  5. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables.

    PubMed

    Brus, D J; de Gruijter, J J

    2003-04-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased such as in preferential samples. The gain in precision compared to the pi estimator in combination with Simple Random Sampling is controlled by the correlation between the target variable and interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and the spatial continuity of the target variable. In a case study the average ratio of the variances of the simple regression estimator and pi estimator was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case the simple regression estimator was substantially more precise than the simple difference estimator.
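
    A toy numerical sketch of the estimators discussed above (synthetic data, numpy only; the regional mean of the interpolated auxiliary variable is taken as known) contrasts the plain pi estimator with the simple regression estimator:

      import numpy as np

      rng = np.random.default_rng(0)
      n = 150
      z = rng.normal(loc=5.0, size=n)            # target variable at the probability-sample points
      x = z + rng.normal(scale=0.6, size=n)      # auxiliary variable: values interpolated from the
                                                 # non-probability sample to the same points
      x_mean_region = 5.0                        # regional mean of the interpolated surface (taken as known here)

      pi_est = z.mean()                          # pi estimator under simple random sampling

      b = np.cov(z, x)[0, 1] / np.var(x, ddof=1) # slope of z on the auxiliary variable
      reg_est = z.mean() + b * (x_mean_region - x.mean())   # simple regression estimator
      print(f"pi estimator: {pi_est:.3f}   regression estimator: {reg_est:.3f}")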

  6. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems.

    PubMed

    Branduardi, Davide; Faraldo-Gómez, José D

    2013-09-10

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.

  8. Estimating Super Heavy Element Event Random Probabilities Using Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Stoyer, Mark; Henderson, Roger; Kenneally, Jacqueline; Moody, Kenton; Nelson, Sarah; Shaughnessy, Dawn; Wilk, Philip

    2009-10-01

    Because superheavy element (SHE) experiments involve very low event rates and low statistics, estimating the probability that a given event sequence is due to random events is extremely important in judging the validity of the data. A Monte Carlo method developed at LLNL [1] is used on recent SHE experimental data to calculate random event probabilities. Current SHE experimental activities in collaboration with scientists at Dubna, Russia will be discussed. [4pt] [1] N.J. Stoyer, et al., Nucl. Instrum. Methods Phys. Res. A 455 (2000) 433.

  9. Earthquake-Induced Landslide Probability Derived From Four Different Methods and Result Comparison

    NASA Astrophysics Data System (ADS)

    Lee, C.

    2005-12-01

    This study analyzed landslides induced by the 1999 Chi-Chi, Taiwan earthquake at a test site in Central Taiwan, called Kuohsing, and landslide spatial probability maps for the test site were made. Landslides induced by the earthquake were extracted from SPOT imagery. Landslide potential factors, which include slope, slope aspect, terrain roughness, total curvature and slope height, were derived from a 40 m resolution DEM. Lithology and structural data were obtained from a 1:50,000-scale geological map. Earthquake strong-motion data were used to calculate the Arias intensity and other ground-motion parameters. The state-of-the-art methods, which include two multivariate approaches (discriminant analysis and logistic regression), an artificial neural network approach, and Newmark's method, were used in the analyses. In the discriminant analysis, the output discriminant scores are used to develop a landslide susceptibility index (LSI). In the logistic regression, the output probability is used as an LSI directly. In the artificial neural network approach, a fuzzy set concept for landslide and non-landslide was incorporated into the analysis so that the network can output a continuous spectrum of landslide and non-landslide membership, and a defuzzifier was used to obtain a nonfuzzy value for the LSI. In Newmark's method, the output value is a Newmark displacement (Dn). All LSIs and Dns are compared with the landslide inventory to calculate the landslide ratio, or probability of failure, for each LSI or Dn interval. These were used to develop probability-of-failure functions against the LSIs or Dn. Landslide probability maps were then drawn by using the probability-of-failure functions. All four methods obtained good results in predicting landslides. The four landslide probability maps show similar probability levels and distribution patterns. Among the four methods, discriminant analysis and logistic regression are both stable and good in predicting landslides. The artificial neural
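
    The logistic-regression branch of such an analysis can be sketched as follows (synthetic factor values and coefficients, assuming scikit-learn); the fitted probability is used directly as the LSI and the landslide ratio is then tabulated per LSI interval:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 5000
      X = np.column_stack([
          rng.uniform(0, 60, n),      # slope (deg)
          rng.uniform(0, 5, n),       # Arias intensity (m/s)
          rng.uniform(0, 200, n),     # slope height (m)
      ])
      logit = -6.0 + 0.08 * X[:, 0] + 0.9 * X[:, 1] + 0.01 * X[:, 2]
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # 1 = landslide cell, 0 = stable cell

      # fitted probability used directly as the landslide susceptibility index
      lsi = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

      # landslide ratio (empirical probability of failure) per LSI interval
      bins = np.linspace(0, 1, 11)
      for lo, hi in zip(bins[:-1], bins[1:]):
          mask = (lsi >= lo) & (lsi < hi)
          if mask.any():
              print(f"LSI {lo:.1f}-{hi:.1f}: landslide ratio {y[mask].mean():.2f}")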

  10. A method of classification for multisource data in remote sensing based on interval-valued probabilities

    NASA Technical Reports Server (NTRS)

    Kim, Hakil; Swain, Philip H.

    1990-01-01

    An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. Then the method is applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally large data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.

  11. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criterion for a good path is overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to a greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.

  12. Application of Simultaneous Equations Method to ANC System with Non-minimum Phase Secondary Path

    NASA Astrophysics Data System (ADS)

    Fujii, Kensaku; Kashihara, Kenji; Wakabayashi, Isao; Muneyasu, Mitsuji; Morimoto, Masakazu

    In this paper, we propose a method capable of shortening the distance from a noise detection microphone to a loudspeaker in an active noise control (ANC) system with a non-minimum phase secondary path. The distance can basically be shortened by forming the noise control filter, which produces the secondary noise provided by the loudspeaker, as the cascade connection of a non-recursive filter and a recursive filter. The output of the recursive filter, however, diverges even when the secondary path includes only a minimum phase component. In this paper, we prevent the divergence by utilizing the MINT (multi-input/output inverse theorem) method, which requires the number of secondary paths to be larger than that of primary paths. The MINT method, however, requires a large-scale inverse matrix operation, which increases the processing cost. We hence propose a method for reducing the processing cost. In fact, the MINT method only needs to be applied to the non-minimum phase components of the secondary paths. We hence extract the non-minimum phase components and then apply the MINT method only to those. The order of the inverse matrix thereby decreases and the processing cost can be reduced. We finally show a simulation result demonstrating that the proposed method works successfully.

  13. Free vibration characteristics of multiple load path blades by the transfer matrix method

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.; Joshi, Arun M.

    1986-01-01

    The determination of free vibrational characteristics is basic to any dynamic design, and these characteristics can form the basis for aeroelastic stability analyses. Conventional helicopter blades are typically idealized as single-load-path blades, and the transfer matrix method is well suited to analyze such blades. Several current helicopter dynamic programs employ transfer matrices to analyze the rotor blades. In this paper, however, the transfer matrix method is extended to treat multiple-load-path blades, without resorting to an equivalent single-load-path approximation. With such an extension, these current rotor dynamic programs which employ the transfer matrix method can be modified with relative ease to account for the multiple load paths. Unlike the conventional blades, the multiple-load-path blades require the introduction of the axial degree-of-freedom into the solution process to account for the differential axial displacements of the different load paths. The transfer matrix formulation is validated through comparison with the finite-element solutions.

  14. A modified toxicity probability interval method for dose-finding trials

    PubMed Central

    Ji, Yuan; Liu, Ping; Li, Yisheng; Bekele, B Nebiyou

    2016-01-01

    Background Building on earlier work, the toxicity probability interval (TPI) method, we present a modified TPI (mTPI) design that is calibration-free for phase I trials. Purpose Our goal is to improve the trial conduct and provide more effective designs while maintaining the simplicity of the original TPI design. Methods Like the TPI method, the mTPI consists of a practical dose-finding scheme guided by the posterior inference for a simple Bayesian model. However, the new method proposes improved dose-finding decision rules based on a new statistic, the unit probability mass (UPM). For a given interval and a probability distribution, the UPM is defined as the ratio of the probability mass of the interval to the length of the interval. Results The improvement through the use of the UPM for dose finding is threefold: (1) the mTPI method appears to be safer than the TPI method in that it puts fewer patients on toxic doses; (2) the mTPI method eliminates the need for calibrating two key parameters, which is required in the TPI method and is a known difficult issue; and (3) the mTPI method corresponds to the Bayes rule under a decision theoretic framework and possesses additional desirable large- and small-sample properties. Limitation The proposed method is applicable to dose-finding trials with a binary toxicity endpoint. Conclusion The new method mTPI is essentially calibration free and exhibits improved performance over the TPI method. These features make the mTPI a desirable choice for the design of practical trials. PMID:20935021
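
    A minimal sketch of the UPM rule, assuming a Beta(1 + x, 1 + n - x) posterior for the toxicity probability at the current dose (scipy required; the target rate and interval margins below are illustrative), is:

      from scipy.stats import beta

      p_T, eps1, eps2 = 0.30, 0.05, 0.05     # target toxicity rate and interval margins
      x, n = 2, 9                            # toxicities observed / patients treated at this dose
      a, b = 1 + x, 1 + n - x                # Beta posterior parameters (uniform prior)

      intervals = {
          "escalate (underdosing)":    (0.0,        p_T - eps1),
          "stay (proper dosing)":      (p_T - eps1, p_T + eps2),
          "de-escalate (overdosing)":  (p_T + eps2, 1.0),
      }
      # UPM = posterior probability mass of the interval divided by its length
      upm = {act: (beta.cdf(hi, a, b) - beta.cdf(lo, a, b)) / (hi - lo)
             for act, (lo, hi) in intervals.items()}
      for act, u in upm.items():
          print(f"{act}: UPM = {u:.3f}")
      print("decision:", max(upm, key=upm.get))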

  15. Selective flow path alpha particle detector and method of use

    DOEpatents

    Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore

    2002-01-01

    A method and apparatus for monitoring alpha contamination are provided in which ions generated in the air surrounding the item, by the passage of alpha particles, are moved to a distant detector location. The parts of the item from which ions are withdrawn can be controlled by restricting the air flow over different portions of the apparatus. In this way, detection of internal and external surfaces separately, for instance, can be provided. The apparatus and method are particularly suited for use in undertaking alpha contamination measurements during the commissioning operations.

  16. Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.

    2016-09-01

    The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using an example of two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass at a dangerous distance from the Earth. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: in the form of points distributed over the entire volume and in the form of points mapped onto the boundary surface. A special feature of the multidimensional point distribution in the first variant of constructing the confidence region, which can lead to a zero estimated probability of collision for bodies that actually collide with the Earth, is demonstrated. The probability estimates obtained with an even considerably smaller number of points in the confidence region determined by its boundary surface are free from this disadvantage.

  17. A simple method for afterpulse probability measurement in high-speed single-photon detectors

    NASA Astrophysics Data System (ADS)

    Liu, Junliang; Li, Yongfu; Ding, Lei; Zhang, Chunfang; Fang, Jiaxiong

    2016-07-01

    A simple statistical method is proposed for afterpulse probability measurement in high-speed single-photon detectors. The method is based on in-laser-period counting without the support of time-correlated information or delay adjustment, and is readily implemented with commercially available logic devices. We present comparisons among the proposed method and commonly used methods which use the time-correlated single-photon counter or the gated counter, based on a 1.25-GHz gated infrared single-photon detector. Results show that this in-laser-period counting method has similar accuracy to the commonly used methods with extra simplicity, robustness, and faster measuring speed.

  18. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
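
    As a hedged illustration of the LASSO approach favoured above (not the authors' xerostomia model; synthetic predictors, assuming scikit-learn), an L1-penalised logistic fit shrinks most candidate coefficients to zero and returns a sparse NTCP-style model:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)
      n, p = 300, 20
      X = rng.normal(size=(n, p))                     # candidate predictors (e.g., dose-volume and clinical factors)
      logit = -1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 3]    # only two predictors truly matter
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # complication observed yes/no

      Xs = StandardScaler().fit_transform(X)
      # L1 penalty (LASSO-type) drives most coefficients exactly to zero
      lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(Xs, y)
      selected = np.flatnonzero(lasso.coef_[0])
      print("predictors retained by LASSO:", selected)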

  19. Method and apparatus for monitoring characteristics of a flow path having solid components flowing therethrough

    DOEpatents

    Hoskinson, Reed L.; Svoboda, John M.; Bauer, William F.; Elias, Gracy

    2008-05-06

    A method and apparatus are provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward a flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.

  20. Path planning method for UUV homing and docking in movement disorders environment.

    PubMed

    Yan, Zheping; Deng, Chao; Chi, Dongnan; Chen, Tao; Hou, Shuping

    2014-01-01

    A path planning method for unmanned underwater vehicles (UUV) homing and docking in a movement disorders environment is proposed in this paper. Firstly, a cost function is proposed for path planning. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with the minimum value of the cost function. Then, a strategy for the UUV to enter the mother vessel at a fixed angle is proposed. Finally, test functions are introduced to analyze the performance of NPSO and compare it with basic particle swarm optimization (BPSO), inertia weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficient (TVAC) variants. It turned out that, for unimodal functions, NPSO showed better search accuracy and stability than the other algorithms, and, for multimodal functions, the performance of NPSO is similar to TVAC. Then, a simulation of UUV path planning is presented, and it showed that, with the strategy proposed in this paper, the UUV can dodge obstacles and threats and search for an efficient path. PMID:25054169
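
    A bare-bones basic-PSO sketch of the waypoint search (numpy only; the cost function, obstacle and all constants are illustrative stand-ins, not the NPSO variant of the paper) is given below:

      import numpy as np

      rng = np.random.default_rng(3)
      start, dock = np.array([0.0, 0.0]), np.array([10.0, 0.0])
      obstacle, r_obs = np.array([5.0, 0.0]), 1.5

      def cost(w):
          # path length through the waypoint plus a penalty for entering the obstacle zone
          length = np.linalg.norm(w - start) + np.linalg.norm(dock - w)
          penalty = 100.0 if np.linalg.norm(w - obstacle) < r_obs else 0.0
          return length + penalty

      n_particles, iters, w_in, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      pos = rng.uniform(-2, 12, size=(n_particles, 2))        # candidate waypoints
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_cost = np.array([cost(p) for p in pos])
      gbest = pbest[pbest_cost.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
          vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          c = np.array([cost(p) for p in pos])
          improved = c < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
          gbest = pbest[pbest_cost.argmin()].copy()

      print("best waypoint:", np.round(gbest, 2), "cost:", round(cost(gbest), 3))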

  2. The "Closed School-Cluster" Method of Selecting a Probability Sample.

    ERIC Educational Resources Information Center

    Shaycoft, Marion F.

    In some educational research studies--particularly longitudinal studies requiring a probability sample of schools and spanning a wide range of grades--it is desirable to so select the sample that schools at different levels (e.g., elementary and secondary) "correspond." This has often proved unachievable, using standard methods of selecting school…

  3. Experimental validation of the orthogonalised reverse path method using a nonlinear beam

    NASA Astrophysics Data System (ADS)

    Muhamed, P.; Worden, K.; Sims, N. D.

    2012-08-01

    The Orthogonalised Reverse Path (ORP) method is a new algorithm of the 'reverse path' class but developed in the time-domain. Like the Conditioned Reverse Path (CRP) method, the ORP approach is capable of identifying the underlying linear FRF of a system or structure in the presence of nonlinearities and may well also lead to simplifications in the estimation of coefficients of nonlinear terms. The method has shown itself to be numerically robust not only for simple simulated SDOF systems but also for simulated MDOF systems. The aim of this paper is to discuss an application of the ORP method to an experimental test set-up based on a nonlinear beam rig.

  4. Probability of Detection (POD) as a statistical model for the validation of qualitative methods.

    PubMed

    Wehling, Paul; LaBudde, Robert A; Brunelle, Sharon L; Nelson, Maria T

    2011-01-01

    A statistical model is presented for use in validation of qualitative methods. This model, termed Probability of Detection (POD), harmonizes the statistical concepts and parameters between quantitative and qualitative method validation. POD characterizes method response with respect to concentration as a continuous variable. The POD model provides a tool for graphical representation of response curves for qualitative methods. In addition, the model allows comparisons between candidate and reference methods, and provides calculations of repeatability, reproducibility, and laboratory effects from collaborative study data. Single laboratory study and collaborative study examples are given.
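
    The following sketch (made-up replicate counts, numpy only) shows the basic POD computation: the proportion of positive replicates at each concentration, with an approximate 95% Wilson confidence interval:

      import numpy as np

      levels   = np.array([0.0, 0.5, 1.0, 2.0, 5.0])   # analyte concentration (e.g., CFU/g)
      positive = np.array([1,   4,   8,   11,  12])    # positive replicates observed
      n        = np.array([12,  12,  12,  12,  12])    # replicates tested

      pod = positive / n                                # probability of detection at each level
      z = 1.96
      denom = 1 + z**2 / n
      center = (pod + z**2 / (2 * n)) / denom           # Wilson interval centre
      half = z * np.sqrt(pod * (1 - pod) / n + z**2 / (4 * n**2)) / denom
      for c, p, lo, hi in zip(levels, pod, center - half, center + half):
          print(f"conc {c:>4}: POD = {p:.2f}  (95% CI {max(lo, 0):.2f}-{min(hi, 1):.2f})")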

  5. Code System to Calculate Group-Averaged Cross Sections Using the Collision Probability Method.

    1995-05-17

    Version 00 This program calculates group-averaged cross sections for specific zones in a one-dimensional geometry. ROLAIDS-CPM is an extension of ROLAIDS from the PSR-315/AMPX-77 package. The main extension is the capability to use the collision probability method for a slab or cylinder geometry rather than the interface-currents method. This new version allows slowing down of neutrons in the energy range where the scattering is elastic and upscattering does not occur. The scattering sources are assumed to be flat and isotropic in the different zones. The extra assumption of cosine currents at the interfaces of the zones (interface currents method) is not necessary for the collision probability method.

  6. a Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images in which true information is lost because of cloud cover and missing data stripes. Water is continuously distributed under natural conditions; thus, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbances from clouds and missing data stripes are simulated. Water information is extracted using global histogram matching, local histogram matching, and the probability-based statistical method on the simulated images. Experiments show that a smaller Areal Error and a higher Boundary Recall can be obtained using this method compared with the conventional methods.

  7. A fast tomographic method for searching the minimum free energy path

    SciTech Connect

    Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

    2014-10-21

    Minimum Free Energy Path (MFEP) provides a lot of important information about the chemical reactions, like the free energy barrier, the location of the transition state, and the relative stability between reactant and product. With MFEP, one can study the mechanisms of the reaction in an efficient way. Due to a large number of degrees of freedom, searching the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is optimized in the collective variable space iteratively. Applications of the present method to model systems show that our method is practical. It can be an alternative approach for finding the state-to-state MFEP.

  8. Measures of activity-based pedestrian exposure to the risk of vehicle-pedestrian collisions: space-time path vs. potential path tree methods.

    PubMed

    Yao, Shenjun; Loo, Becky P Y; Lam, Winnie W Y

    2015-02-01

    Research on the extent to which pedestrians are exposed to road collision risk is important to the improvement of pedestrian safety. As precise geographical information is often difficult and costly to collect, this study proposes a potential path tree method derived from time geography concepts in measuring pedestrian exposure. With negative binomial regression (NBR) and geographically weighted Poisson regression (GWPR) models, the proposed probabilistic two-anchor-point potential path tree (PPT) approach (including the equal and weighted PPT methods) are compared with the deterministic space-time path (STP) method. The results indicate that both STP and PPT methods are useful tools in measuring pedestrian exposure. While the STP method can save much time, the PPT methods outperform the STP method in explaining the underlying vehicle-pedestrian collision pattern. Further research efforts are needed to investigate the influence of walking speed and route choice.

  10. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.

  11. Enumeration of fungi in fruits by the most probable number method.

    PubMed

    Watanabe, Maiko; Tsutsumi, Fumiyuki; Lee, Ken-ichi; Sugita-Konishi, Yoshiko; Kumagai, Susumu; Takatori, Kosuke; Hara-Kudo, Yukiko; Konuma, Hirotaka

    2010-01-01

    In this study, enumeration methods for fungi in foods were evaluated using fruits, which are often contaminated by fungi in the field and rot because of fungal contaminants. As the test methods, we used the standard most probable number (MPN) method with liquid medium in test tubes, which is traditionally used as an enumeration method for bacteria, and the plate-MPN method with agar plate media, in addition to the surface plating method as the traditional enumeration method for fungi. We tested 27 samples of 9 commercial domestic fruits using their surface skin. The results indicated that the standard MPN method showed slow recovery of fungi in test tubes and lower counts than the surface plating method and the plate-MPN method in almost all samples. The fungal count on the 4th day of incubation was approximately the same as on the 10th day by the surface plating method or the plate-MPN method, indicating no significant differences between the fungal counts by these two methods. This result indicated that the plate-MPN method is in good numerical agreement with the traditional enumeration method. Moreover, the plate-MPN method is a little less laborious because colonies do not have to be counted: fungal counts are estimated based on the number of plates with growing colonies. These advantages demonstrate that the plate-MPN method is a comparatively superior and rapid method for the enumeration of fungi.
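
    For reference, the maximum-likelihood computation that underlies any MPN estimate can be sketched as below (illustrative 3-tube counts; the root is found by bisection on a log scale):

      import math

      # with g_i positive tubes out of n_i at inoculum amount v_i, the MPN m solves
      #   sum_i g_i * v_i * exp(-m*v_i) / (1 - exp(-m*v_i)) = sum_i (n_i - g_i) * v_i
      v = [0.1, 0.01, 0.001]   # inoculum per tube, g (10^-1, 10^-2, 10^-3 dilutions)
      n = [3, 3, 3]            # tubes per dilution
      g = [3, 2, 0]            # positive tubes per dilution

      def score(m):
          s = 0.0
          for vi, ni, gi in zip(v, n, g):
              s += gi * vi * math.exp(-m * vi) / (1.0 - math.exp(-m * vi))
              s -= (ni - gi) * vi
          return s

      lo, hi = 1e-6, 1e6       # bracket the root and bisect on a log scale
      for _ in range(200):
          mid = math.sqrt(lo * hi)
          if score(mid) > 0:
              lo = mid
          else:
              hi = mid
      print(f"MPN ~ {lo:.1f} per g")    # standard tables give about 93/g for the 3-2-0 pattern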

  12. Lipid extraction methods from microalgal biomass harvested by two different paths: screening studies toward biodiesel production.

    PubMed

    Ríos, Sergio D; Castañeda, Joandiet; Torras, Carles; Farriol, Xavier; Salvadó, Joan

    2013-04-01

    Microalgae can grow rapidly and capture CO2 from the atmosphere to convert it into complex organic molecules such as lipids (biodiesel feedstock). Economically feasible large-scale microalgae-based oil production depends on optimizing the entire production process. This process can be divided into three very different but directly related steps (production, concentration, and lipid extraction with transesterification). The aim of this study is to identify the best method of lipid extraction to assess the potential of microalgal biomass obtained from two different harvesting paths. The first path used all-physical concentration steps, and the second path was a combination of chemical and physical concentration steps. Three microalgae species were tested: Phaeodactylum tricornutum, Nannochloropsis gaditana, and Chaetoceros calcitrans. One-step lipid extraction-transesterification reached the same fatty acid methyl ester yield as the Bligh and Dyer and soxhlet extraction with n-hexane methods, with the corresponding time, cost and solvent savings. PMID:23434816

  13. 14C-most-probable-number method for enumeration of active heterotrophic microorganisms in natural waters.

    PubMed Central

    Lehmicke, L G; Williams, R T; Crawford, R L

    1979-01-01

    A most-probable-number method using 14C-labeled substrates is described for the enumeration of aquatic populations of heterotrophic microorganisms. Natural populations of microorganisms are inoculated into dilution replicates prepared from the natural water from which the organisms originated. The natural water is supplemented with a 14C-labeled compound added so as to approximate a true environmental concentration. 14CO2 evolved by individual replicates is trapped in NaOH and counted by liquid scintillation techniques for use in scoring replicates as positive or negative. Positives (14CO2 evolution) are easily distinguished from negatives (no 14CO2 evolution). The results from a variety of environments using the 14CO2 procedure agreed well with previously described methods, in most instances. The 14C-most-probable-number method described here reduces handling procedures over previously described most-probable-number procedures using 14C-labeled substrates. It also appears to have advantages over other enumeration methods in its attempt to approximate natural conditions more closely. PMID:120133

  14. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider sweep efficiency of the injected phase as quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, both in combination with standard and multilevel Monte Carlo.
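
    A toy two-level version of the multilevel estimator P ~ E[I_0] + E[I_1 - I_0], where I_l indicates that the level-l approximation of the quantity of interest falls below the critical value, can be written as follows (the "solver" here is a synthetic surrogate, numpy only; not the porous-media simulator of the paper):

      import numpy as np

      rng = np.random.default_rng(4)
      q_crit = -1.0

      def quantity(omega, level):
          # synthetic quantity of interest: exact value plus a level-dependent discretisation bias
          return omega + 0.3 * 0.5**level * np.sin(37.0 * omega)

      def indicator(omega, level):
          return (quantity(omega, level) < q_crit).astype(float)

      # level 0: many cheap samples; level 1: fewer samples of the coupled correction term
      omega0 = rng.normal(size=20000)
      omega1 = rng.normal(size=2000)
      p_mlmc = indicator(omega0, 0).mean() + (indicator(omega1, 1) - indicator(omega1, 0)).mean()

      p_ref = (rng.normal(size=200000) < q_crit).mean()   # reference single-level estimate of the exact model
      print(f"MLMC estimate: {p_mlmc:.4f}   reference: {p_ref:.4f}")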

  15. Identification of contaminant point source in surface waters based on backward location probability density function method

    NASA Astrophysics Data System (ADS)

    Cheng, Wei Ping; Jia, Yafei

    2010-04-01

    A backward location probability density function (BL-PDF) method capable of identifying location of point sources in surface waters is presented in this paper. The relation of forward location probability density function (FL-PDF) and backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well to the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, diffusivity, and velocity gradients. The performance of identification method is evaluated regarding the random error and number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations needed to solve is equal to the number of observations.

  16. A parallel multiple path tracing method based on OptiX for infrared image generation

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu

    2015-12-01

    Infrared image generation technology is widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, all of which require a physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX is proposed to solve this problem; compared with serial CPU ray tracing it not only increases computational efficiency but also produces relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed and a multiple path tracing method based on OptiX was developed accordingly. Furthermore, Monte Carlo integration was employed to solve the radiation transfer equation, in which importance sampling was applied to accelerate the convergence rate of the integral. After that, the framework of the simulation platform and its sensor-effects simulation diagram are given. Finally, the results showed that the method can generate relatively accurate radiation images when a precise importance sampling method is available.

  17. Real-time optical path control method that utilizes multiple support vector machines for traffic prediction

    NASA Astrophysics Data System (ADS)

    Kawase, Hiroshi; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

    An effective solution to the continuous Internet traffic expansion is to offload traffic to lower layers such as the L2 or L1 optical layers. One possible approach is to introduce dynamic optical path operations such as adaptive establishment/tear down according to traffic variation. Path operations cannot be done instantaneously; hence, traffic prediction is essential. Conventional prediction techniques need optimal parameter values to be determined in advance by averaging long-term variations from the past. However, this does not allow adaptation to the ever-changing short-term variations expected to be common in future networks. In this paper, we propose a real-time optical path control method based on a machine-learning technique involving support vector machines (SVMs). A SVM learns the most recent traffic characteristics, and so enables better adaptation to temporal traffic variations than conventional techniques. The difficulty lies in determining how to minimize the time gap between optical path operation and buffer management at the originating points of those paths. The gap makes the required learning data set enormous and the learning process costly. To resolve the problem, we propose the adoption of multiple SVMs running in parallel, trained with non-overlapping subsets of the original data set. The maximum value of the outputs of these SVMs will be the estimated number of necessary paths. Numerical experiments prove that our proposed method outperforms a conventional prediction method, the autoregressive moving average method with optimal parameter values determined by Akaike's information criterion, and reduces the packet-loss ratio by up to 98%.
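
    A rough sketch of the parallel-SVM idea (synthetic traffic, assuming scikit-learn; constants such as the number of machines and the per-path capacity are hypothetical) trains several regressors on non-overlapping subsets of recent samples and provisions paths from the maximum forecast:

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(5)
      t = np.arange(2000)
      traffic = 50 + 10 * np.sin(2 * np.pi * t / 200) + rng.normal(scale=2, size=t.size)

      lag = 10
      X = np.array([traffic[i:i + lag] for i in range(len(traffic) - lag)])   # lag windows
      y = traffic[lag:]                                                       # next-step traffic

      k, capacity = 4, 20.0                      # number of SVMs and traffic carried per optical path
      subsets = np.array_split(np.arange(len(y)), k)   # non-overlapping training subsets
      models = [SVR(C=10.0, epsilon=0.5).fit(X[idx], y[idx]) for idx in subsets]

      latest = traffic[-lag:].reshape(1, -1)
      predictions = [m.predict(latest)[0] for m in models]
      needed_paths = int(np.ceil(max(predictions) / capacity))   # provision for the largest forecast
      print("per-SVM forecasts:", np.round(predictions, 1), "-> provision", needed_paths, "paths")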

  18. Sensitivity analysis of pesticides contaminating groundwater by applying probability and transport methods.

    PubMed

    Zhang, Pengxin; Aagaard, Per; Nadim, Farrokh; Gottschalk, Lars; Haarstad, Ketil

    2009-07-01

    The use of pesticides is a potential threat to local groundwater. Once groundwater is contaminated, it is very difficult to clean. Thus, it is of importance to assess the risk of contaminating local groundwater at an early stage, when pesticides are found in soils. This knowledge will also help in remediation strategies. Traditional methods of deterministic analysis cannot explicitly account for the sometimes large uncertainties that exist at this stage in the work, whereas probabilistic analyses are better suited for dealing with these problems. In this paper, we have combined contaminant transport with a 1st-order reliability approach. Pesticide concentrations in soil have been studied to estimate the probability of failure--that is, of pesticides exceeding established critical levels in groundwater. Results indicate that failure probability increases rapidly within a certain range of pesticide concentrations in soil for different critical levels. For given aquifer conditions and contaminants, probabilities of contaminants exceeding particular critical levels can easily be obtained according to various water usage scenarios. The distribution of importance factors among variables indicates the contribution their relative weights make to the failure probability. Hence, authorities can easily use sensitivity factors to take action and reduce the risk of contaminating the groundwater.

  19. Probability density based gradient projection method for inverse kinematics of a robotic human body model.

    PubMed

    Lura, Derek; Wernke, Matthew; Alqasemi, Redwan; Carey, Stephanie; Dubey, Rajiv

    2012-01-01

    This paper presents the probability density based gradient projection (GP) of the null space of the Jacobian for a 25 degree of freedom bilateral robotic human body model (RHBM). This method was used to predict the inverse kinematics of the RHBM and maximize the similarity between predicted inverse kinematic poses and recorded data of 10 subjects performing activities of daily living. The density function was created for discrete increments of the workspace. The number of increments in each direction (x, y, and z) was varied from 1 to 20. Performance of the method was evaluated by finding the root mean squared (RMS) difference between the predicted joint angles and the joint angles recorded from motion capture. The amount of data included in the creation of the probability density function was varied from 1 to 10 subjects, creating sets of subjects included in and excluded from the density function. The performance of the GP method for subjects included in and excluded from the density function was evaluated to test the robustness of the method. Accuracy of the GP method varied with the incremental division of the workspace: increasing the number of increments decreased the RMS error of the method, with the average RMS error for included subjects ranging from 7.7° to 3.7°. However, increasing the number of increments also decreased the robustness of the method.
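
    The null-space gradient projection step itself can be illustrated on a planar 3-link arm (numpy only; the arm and the quadratic stand-in for the probability-density objective are illustrative, not the 25-DOF RHBM):

      import numpy as np

      L = np.array([0.3, 0.3, 0.2])                    # link lengths

      def fk(q):
          angles = np.cumsum(q)
          return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

      def jacobian(q):
          angles = np.cumsum(q)
          J = np.zeros((2, 3))
          for i in range(3):
              J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
              J[1, i] =  np.sum(L[i:] * np.cos(angles[i:]))
          return J

      q = np.array([0.1, 0.2, 0.3])
      q_probable = np.array([0.4, 0.5, 0.2])           # posture of highest density (stand-in objective)
      target = np.array([0.55, 0.35])

      for _ in range(200):
          J = jacobian(q)
          J_pinv = np.linalg.pinv(J)
          dx = target - fk(q)
          grad_H = -(q - q_probable)                   # ascend the (stand-in) log-density
          # pseudoinverse solves the task; the gradient is projected onto the Jacobian null space
          dq = J_pinv @ dx + (np.eye(3) - J_pinv @ J) @ (0.5 * grad_H)
          q = q + 0.2 * dq
      print("position error:", np.round(fk(q) - target, 4), " joints:", np.round(q, 3))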

  20. The Most Probable Limit of Detection (MPL) for rapid microbiological methods.

    PubMed

    Verdonk, G P H T; Willemse, M J; Hoefs, S G G; Cremers, G; van den Heuvel, E R

    2010-09-01

    Classical microbiological methods nowadays have unacceptably long cycle times. Rapid methods, available on the market for decades, are already applied in the clinical and food industries, but their implementation in the pharmaceutical industry is hampered by, for instance, stringent regulations on validation and comparability with classical methods. Equivalence studies become less relevant when rapid methods are able to detect a single microorganism. Directly testing this capability is currently impossible because of the problems associated with preparing a spiked sample with low microbial counts. To precisely estimate the limit of detection of rapid absence/presence tests, the method of the most probable limit is presented. It is based on three important elements: a relatively precise quantity of microorganisms, a non-serial dilution experiment, and a statistical approach. For a set of microorganisms, a limit of detection of one was demonstrated using two different rapid methods.

  1. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  2. The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen; Leighton, Tom; Miller, Gary L.

    1997-01-01

    We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on λ2 and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on λ2 that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.

  4. Path ANalysis

    SciTech Connect

    Snell, Mark K.

    2007-07-14

    The PANL software determines path through an Adversary Sequence Diagram (ASD) with minimum Probability of Interruption, P(I), given the ASD information and data about site detection, delay, and response force times. To accomplish this, the software generates each path through the ASD, then applies the Estimate of Adversary Sequence Interruption (EASI) methodology for calculating P(I) to each path, and keeps track of the path with the lowest P(I). Primary use is for training purposes during courses on physical security design. During such courses PANL will be used to demonstrate to students how more complex software codes are used by the US Department of Energy to determine the most-vulnerable paths and, where security needs improvement, how such codes can help determine physical security upgrades.
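
    A simplified EASI-style P(I) calculation for a single path can be sketched as follows (hypothetical detection probabilities, delays and response-force times; detection is assumed to occur at the start of each element and all times are treated as normally distributed):

      import math

      def norm_cdf(x):
          return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

      # one candidate path through the ASD: (P(detection), mean delay s, delay std s) per element
      path = [(0.0, 30, 9), (0.9, 90, 27), (0.5, 60, 18), (0.2, 20, 6)]
      response_mean, response_std = 120.0, 30.0        # guard response-force time

      p_interrupt, p_no_detect_yet = 0.0, 1.0
      for i, (p_d, _, _) in enumerate(path):
          # adversary delay remaining after the point where detection i occurs
          t_rem = sum(d for (_, d, _) in path[i:])
          var_rem = sum(s**2 for (_, _, s) in path[i:])
          # probability the response arrives before the adversary finishes the remaining elements
          p_response_in_time = norm_cdf((t_rem - response_mean)
                                        / math.sqrt(var_rem + response_std**2))
          p_interrupt += p_no_detect_yet * p_d * p_response_in_time
          p_no_detect_yet *= (1.0 - p_d)

      print(f"P(I) for this path = {p_interrupt:.3f}")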

  5. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method.

    PubMed

    Alani, Amir M; Faramarzi, Asaad

    2015-06-10

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the concerned industry (e.g., water companies) to better plan their resources by providing accurate prediction for the remaining safe life of cementitious sewer pipes.

  6. RATIONAL DETERMINATION METHOD OF PROBABLE FREEZING INDEX FOR n-YEARS CONSIDERING THE REGIONAL CHARACTERISTICS

    NASA Astrophysics Data System (ADS)

    Kawabata, Shinichiro; Hayashi, Keiji; Kameyama, Shuichi

    This paper investigates a method for obtaining the probable freezing index for n-years from past frost-action damage and meteorological data. From an investigation of Japanese cold-winter data for Hokkaido, Tohoku and the areas south of Tohoku, it was found that the severity of cold winters showed a regularity depending on how far north or south the location is. Also, after obtaining return periods of cold winters by area, obvious regional characteristics were found. Mild winters are rare in Hokkaido; however, it was clarified that when Hokkaido did have cold winters, their severity was greater. It was effective to determine the probable freezing indices as 20-, 15- and 10-year return periods for Hokkaido, Tohoku and the areas south of Tohoku, respectively.

  7. Computing light statistics in heterogeneous media based on a mass weighted probability density function method.

    PubMed

    Jenny, Patrick; Mourad, Safer; Stamm, Tobias; Vöge, Markus; Simon, Klaus

    2007-08-01

    Based on the transport theory, we present a modeling approach to light scattering in turbid material. It uses an efficient and general statistical description of the material's scattering and absorption behavior. The model estimates the spatial distribution of intensity and the flow direction of radiation, both of which are required, e.g., for adaptable predictions of the appearance of colors in halftone prints. This is achieved by employing a computational particle method, which solves a model equation for the probability density function of photon positions and propagation directions. In this framework, each computational particle represents a finite probability of finding a photon in a corresponding state, including properties like wavelength. Model evaluations and verifications conclude the discussion.

  8. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  9. A Priori Knowledge and Probability Density Based Segmentation Method for Medical CT Image Sequences

    PubMed Central

    Tan, Hanqing; Yang, Benqiang

    2014-01-01

    This paper introduces a novel segmentation strategy for CT image sequences. As the first step of our strategy, we extract a priori intensity statistical information from the object region that is manually segmented by radiologists. Then we define a search scope for the object and calculate the probability density for each pixel in the scope using a voting mechanism. Moreover, we generate an optimal initial level set contour based on the a priori shape of the object in the previous slice. Finally, the modified distance-regularized level set method utilizes boundary features and the probability density to confirm the final object. The main contributions of this paper are as follows: a priori knowledge is effectively used to guide the determination of objects, and a modified distance regularization level set method can accurately extract the actual contour of the object in a short time. The proposed method is compared to seven other state-of-the-art medical image segmentation methods on abdominal CT image sequence datasets. The evaluation results demonstrate that our method performs better and has potential for segmentation of CT image sequences. PMID:24967402

  10. A bio-inspired method for the constrained shortest path problem.

    PubMed

    Wang, Hongping; Lu, Xi; Zhang, Xiaoge; Wang, Qing; Deng, Yong

    2014-01-01

    The constrained shortest path (CSP) problem has been widely used in transportation optimization, crew scheduling, network routing and so on. It remains an open issue since it is an NP-hard problem. In this paper, we propose an innovative method based on the internal mechanism of the adaptive amoeba algorithm. The proposed method is divided into two parts. In the first part, we employ the original amoeba algorithm to solve the shortest path problem in directed networks. In the second part, we combine the Physarum algorithm with a bio-inspired rule to deal with the CSP. Finally, by comparing the results with other methods on examples of the DCLC problem, we demonstrate the accuracy of the proposed method.
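
    For reference, the sketch below solves a tiny CSP instance exactly with a label-setting search under a delay budget; it illustrates the problem being addressed, not the amoeba/Physarum mechanism of the paper, and the example graph is invented.

```python
# Reference sketch of the constrained shortest path (CSP) problem itself,
# solved by a label-setting search with a delay budget; this is NOT the
# bio-inspired amoeba/Physarum algorithm of the record.  The graph is made up.
import heapq

def csp(graph, src, dst, max_delay):
    """graph[u] = list of (v, cost, delay); returns (cost, path) or None."""
    heap = [(0.0, 0.0, src, [src])]           # (cost, delay, node, path)
    best = {}                                  # (node, delay) -> best cost seen
    while heap:
        cost, delay, u, path = heapq.heappop(heap)
        if u == dst:
            return cost, path
        for v, c, d in graph.get(u, []):
            nc, nd = cost + c, delay + d
            if nd > max_delay:
                continue                       # violates the delay constraint
            key = (v, nd)
            if best.get(key, float("inf")) <= nc:
                continue                       # dominated label, prune
            best[key] = nc
            heapq.heappush(heap, (nc, nd, v, path + [v]))
    return None

graph = {"s": [("a", 1, 5), ("b", 4, 1)],
         "a": [("t", 1, 5)],
         "b": [("t", 4, 1)]}
print(csp(graph, "s", "t", max_delay=4))       # cheap path violates delay -> (8.0, ['s', 'b', 't'])
```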

  11. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.
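
    A minimal sketch of the basic observed statistic: the POI as a proportion of identified replicates, here paired with a Wilson score interval (a standard binomial interval, not necessarily the one prescribed in the report). The counts are invented.

```python
# Sketch: probability of identification (POI) as the proportion of replicate
# tests returning "Identified", with a Wilson score confidence interval.
# (The Wilson interval is a common binomial interval; the report may
# prescribe a different one.)  The counts are hypothetical.
import math

def poi_with_wilson_ci(identified, replicates, z=1.96):
    p = identified / replicates
    denom = 1 + z**2 / replicates
    center = (p + z**2 / (2 * replicates)) / denom
    half = z * math.sqrt(p * (1 - p) / replicates + z**2 / (4 * replicates**2)) / denom
    return p, max(0.0, center - half), min(1.0, center + half)

print(poi_with_wilson_ci(identified=11, replicates=12))
```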

  12. Signal optimization, noise reduction, and systematic error compensation methods in long-path DOAS measurements

    NASA Astrophysics Data System (ADS)

    Simeone, Emilio; Donati, Alessandro

    1998-12-01

    Increasing the exploitable optical path length represents one of the most important efforts in improving differential optical absorption spectroscopy (DOAS) instruments. The methods that allow long-path measurements in the UV region are presented and discussed in this paper. These methods have been tested in the new Italian DOAS instrument, SPOT, developed and manufactured by Kayser Italia. The system was equipped with a tele-controlled optical shuttle on the light source unit, allowing background radiation measurement. Absolute wavelength calibration of spectra by means of a collimated UV beam from a mercury lamp integrated in the telescope has been exploited. In addition, possible thermal effects on the dispersion coefficients of the holographic grating have been automatically compensated by means of a general non-linear fit during the spectral analysis session. Measurements in bistatic configuration have been performed in urban areas at 1300 m and 2200 m in three spectral windows from 245 to 380 nm. Measurements with these features are expected in the other spectral windows on path lengths ranging from about 5 to 10 km in urban areas. The DOAS technique can be used in the field for very fast measurements in the 245-275 nm spectral range, on path lengths up to about 2500 m.

  13. A Simple Method for Solving the SVM Regularization Path for Semidefinite Kernels.

    PubMed

    Sentelle, Christopher G; Anagnostopoulos, Georgios C; Georgiopoulos, Michael

    2016-04-01

    The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and applicability of kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently solve the solution at all possible values of the regularization parameter, relying on the fact that the SVM solution is piecewise linear in C. The SVMPath originally introduced by Hastie et al., while representing a significant theoretical contribution, does not work with semidefinite kernels. Ong et al. introduced an improved SVMPath (ISVMP) algorithm, which addresses the semidefinite kernel; however, singular value decomposition or QR factorizations are required, and a linear programming solver is required to find the next C value at each iteration. We introduce a simple implementation of the path-following algorithm that automatically handles semidefinite kernels without requiring a method to detect singular matrices nor requiring specialized factorizations or an external solver. We provide theoretical results showing how this method resolves issues associated with the semidefinite kernel as well as discuss, in detail, the potential sources of degeneracy and cycling and how cycling is resolved. Moreover, we introduce an initialization method for unequal class sizes based upon artificial variables that works within the context of the existing path-following algorithm and does not require an external solver. Experiments compare performance with the ISVMP algorithm introduced by Ong et al. and show that the proposed method is competitive in terms of training time while also maintaining high accuracy. PMID:26011894

  14. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E.; Valentine, John D.; Beauchamp, Brock R.

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
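
    A minimal sketch of the underlying SPRT decision rule for Poisson count data is given below; the rates, error targets, and simulated counts are illustrative and are not taken from the patent.

```python
# Minimal SPRT sketch for detecting an elevated Poisson count rate against a
# known background rate; rates, error targets, and the simulated count stream
# are illustrative, not taken from the patent.
import math, random

lam0, lam1 = 5.0, 8.0            # background vs. elevated counts per interval
alpha, beta = 0.01, 0.01         # target false-alarm / missed-detection rates
upper = math.log((1 - beta) / alpha)     # decide "elevated" above this
lower = math.log(beta / (1 - alpha))     # decide "background" below this

def poisson(lam):
    # Knuth's method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

llr = 0.0
for interval in range(1, 1000):
    n = poisson(lam1)            # simulate counts from an elevated source
    # log-likelihood ratio increment for one Poisson observation
    llr += n * math.log(lam1 / lam0) - (lam1 - lam0)
    if llr >= upper:
        print(f"alarm: elevated radiation decided after {interval} intervals")
        break
    if llr <= lower:
        print(f"background decided after {interval} intervals")
        break
```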

  15. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    SciTech Connect

    Shafii, Mohammad Ali Meidianti, Rahma Wildian, Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of neutron transport using the CP method is performed with a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with the spatial mesh treated using a non-flat flux approach. This means that the neutron flux at each point in the nuclear fuel cell is allowed to differ, following a quadratic flux distribution. The result is presented here in the form of a quadratic flux that gives a better understanding of the real conditions in the cell calculation and serves as a starting point for application in computational calculations.

  16. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous

  17. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    NASA Astrophysics Data System (ADS)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  18. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5 (+). We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  19. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5 (+). We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state. PMID:27497533

  20. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  1. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Katrinia M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  2. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
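
    The core of such an update can be sketched with a conjugate beta-binomial model, treating the HEP from the existing HRA method as the prior mean; the prior strength, HEP value, and simulator counts below are assumptions for illustration, not values from the article or from SPAR-H.

```python
# Minimal sketch of the core idea: treat the HEP assigned by an existing HRA
# method as the mean of a beta prior, then update it with (hypothetical)
# simulator data.  The prior strength, HEP value, and counts are made up and
# do not come from the article or from SPAR-H.
prior_hep = 0.01          # HEP assigned by the existing HRA method (assumed)
prior_strength = 50.0     # pseudo-observations expressing confidence in it (assumed)
a0 = prior_hep * prior_strength
b0 = (1.0 - prior_hep) * prior_strength

errors, trials = 3, 40    # hypothetical simulator outcomes
a1, b1 = a0 + errors, b0 + (trials - errors)

posterior_hep = a1 / (a1 + b1)
print(f"prior HEP {prior_hep:.4f} -> posterior HEP {posterior_hep:.4f}")
```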

  3. Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2016-05-14

    Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained.

  4. Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2016-05-01

    Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained.

  5. Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2016-05-14

    Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained. PMID:27179465

  6. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method to estimate emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated emission rates of methane along with meteorological and PIC data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with actual release rates irrespective of atmospheric stability conditions. The maximum error was 22% when 3-cycle moving average PICs were used; however, it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  7. Selective enumeration method for identification of dominant failure paths of large structures

    SciTech Connect

    Shetty, N.K.

    1994-12-31

    Identification of dominant failure paths forms one of the major tasks in the system reliability evaluation of redundant structures using a failure-tree approach. The Branch-and-Bound method, commonly used for this purpose, is computationally very expensive when applied to the analysis of large structures, such as offshore jackets. A more efficient method is proposed in this paper which features selective enumeration and preferential branching. Based on the study of a number of influencing factors, such as component safety margins, the structure's closeness to failure, and the reliability indices of individual components and failure paths, a strategy for ordering of failure elements is derived. Enumeration from each node of the tree follows the order of importance of components and is stopped once the truncation criterion is satisfied for one of the resulting branches. A truncation criterion is established in advance and is applied at each level of the analysis. The branch selection criterion accounts for statistical correlation between paths and the importance of the branch to system reliability. The proposed method has been implemented in the RASOS software and is demonstrated on a shallow-water jacket platform.

  8. Probability Theory

    NASA Astrophysics Data System (ADS)

    Jaynes, E. T.; Bretthorst, G. Larry

    2003-04-01

    Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.

  9. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method.

    PubMed

    Alani, Amir M; Faramarzi, Asaad

    2015-06-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes. PMID:26068092

  10. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population survey-methodologically because the response rate is low and its members are not quite honest with their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain, and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the chain-referral sampling of RDS tends to minimize as the sample gets bigger, and it becomes stabilized as the wave progresses. Therefore, it shows that the final sample information can be completely independent from the initial seeds if a certain level of sample size is secured even if the initial seeds were selected through convenient sampling. Thus, RDS can be considered as an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well. PMID:26107223
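
    A toy version of the chain-referral process can be simulated as below on a synthetic random network (using networkx); the network model, coupon number, and target sample size are illustrative and do not reproduce the thesis's simulation.

```python
# Toy simulation of respondent-driven (chain-referral) sampling on a random
# social network; the network model and parameters are illustrative only.
import random
import networkx as nx

random.seed(1)
g = nx.erdos_renyi_graph(n=2000, p=0.005, seed=1)    # stand-in social network

seeds = random.sample(list(g.nodes), 5)               # conveniently chosen seeds
sampled, frontier = set(seeds), list(seeds)
coupons_per_respondent = 3
target_size = 300

wave = 0
while frontier and len(sampled) < target_size:
    wave += 1
    next_frontier = []
    for person in frontier:
        peers = [v for v in g.neighbors(person) if v not in sampled]
        for recruit in random.sample(peers, min(coupons_per_respondent, len(peers))):
            if len(sampled) >= target_size:
                break
            sampled.add(recruit)
            next_frontier.append(recruit)
    frontier = next_frontier
    print(f"wave {wave}: sample size {len(sampled)}")
```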

  11. Neutral particle transport in plasma edge using transmission/escape probability (TEP) method

    NASA Astrophysics Data System (ADS)

    Zhang, Dingkang

    Neutral particles play an important role in the performance of tokamak plasmas. In this dissertation, the original TEP methodology has been extended to take into account linearly (DP1) and quadratically (DP2) anisotropic distributions of angular fluxes for calculations of transmission probabilities. Three approaches, subdivision of optically thick regions, expansion of collision sources and the diffusion approximation, have been developed and implemented to correct for effects of the preferential probability of collided neutrals escaping back across the incident surface. Solving the diffusion equation via the finite element method has been shown to be the most computationally efficient and accurate for a broader range of Δ/λ by comparisons with Monte Carlo simulations. The average neutral energy (ANE) approximation has been developed and implemented in the GTNEUT code. The average neutral energy approximation has been demonstrated to be more accurate than the original local ion temperature (LIT) approximation for optically thin regions. The simulations of the upgraded GTNEUT code agree excellently with the DEGAS predictions in DIII-D L-mode and H-mode discharges, and the results of both codes are in good agreement with the experimental measurements.

  12. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
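
    As a point of reference for the maximum entropy side of this comparison, the sketch below computes a 1D maximum entropy density on a grid by minimizing the dual objective with the first two sample moments as constraints; it is not the Bayesian-field-theory estimator of the paper, and the data are synthetic.

```python
# Sketch of 1D maximum entropy density estimation on a grid, with the first
# two sample moments as constraints (the classic result is a Gaussian); this
# is a reference computation, not the paper's field-theory estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=0.7, size=500)        # synthetic sample

x = np.linspace(-3.0, 5.0, 400)
dx = x[1] - x[0]
features = np.vstack([x, x**2])                        # constrained statistics f_i(x)
targets = np.array([data.mean(), (data**2).mean()])    # sample moments

def dual(lam):
    # Maximum entropy dual objective: log partition function + lam . targets,
    # for the candidate density p(x) proportional to exp(-lam . f(x)).
    logz = np.log(np.sum(np.exp(-lam @ features)) * dx)
    return logz + lam @ targets

lam = minimize(dual, x0=np.zeros(2), method="Nelder-Mead").x
density = np.exp(-lam @ features)
density /= density.sum() * dx
print("fitted moments:", (x * density).sum() * dx, (x**2 * density).sum() * dx)
```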

  13. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population survey-methodologically because the response rate is low and its members are not quite honest with their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain, and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the chain-referral sampling of RDS tends to minimize as the sample gets bigger, and it becomes stabilized as the wave progresses. Therefore, it shows that the final sample information can be completely independent from the initial seeds if a certain level of sample size is secured even if the initial seeds were selected through convenient sampling. Thus, RDS can be considered as an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well.

  14. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method

    PubMed Central

    Alani, Amir M.; Faramarzi, Asaad

    2015-01-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes. PMID:26068092

  15. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
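
    A tiny worked example of the kind of calculation involved: enumerating the joint probability "table" and reading a conditional probability from it via Bayes' rule. The numbers are invented and not taken from the ERIC record.

```python
# Tiny illustration of Bayes' rule by enumerating a joint probability table
# (disease-test example with made-up numbers, not from the ERIC record).
prevalence = 0.01          # P(D)
sensitivity = 0.95         # P(+ | D)
specificity = 0.90         # P(- | not D)

# Joint probabilities for the four "truth table" rows (D / not D  x  + / -)
p_d_pos = prevalence * sensitivity
p_d_neg = prevalence * (1 - sensitivity)
p_nd_pos = (1 - prevalence) * (1 - specificity)
p_nd_neg = (1 - prevalence) * specificity

# Conditional probability read directly off the table: P(D | +) = P(D, +) / P(+)
p_pos = p_d_pos + p_nd_pos
print("P(D | +) =", p_d_pos / p_pos)     # about 0.088
```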

  16. Analytical error analysis of Clifford gates by the fault-path tracer method

    NASA Astrophysics Data System (ADS)

    Janardan, Smitha; Tomita, Yu; Gutiérrez, Mauricio; Brown, Kenneth R.

    2016-08-01

    We estimate the success probability of quantum protocols composed of Clifford operations in the presence of Pauli errors. Our method is derived from the fault-point formalism previously used to determine the success rate of low-distance error correction codes. Here we apply it to a wider range of quantum protocols and identify circuit structures that allow for efficient calculation of the exact success probability and even the final distribution of output states. As examples, we apply our method to the Bernstein-Vazirani algorithm and the Steane [[7,1,3]] code.
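
    A hedged Monte Carlo counterpart to this idea (sampling Pauli faults and propagating them with the standard Clifford conjugation rules, rather than the analytical fault-path bookkeeping of the paper) is sketched below for a made-up two-qubit circuit and error rate.

```python
# Monte Carlo counterpart to the fault-path idea (not the analytical
# fault-path tracer itself): sample Pauli errors after each Clifford gate,
# propagate them with the standard conjugation rules, and count the runs in
# which the output is left unchanged.  Circuit and error rate are made up.
import random

def apply_h(frame, q):                 # H conjugation: X <-> Z on qubit q
    x, z = frame[q]
    frame[q] = (z, x)

def apply_cnot(frame, c, t):           # CNOT conjugation: X copies c->t, Z copies t->c
    xc, zc = frame[c]
    xt, zt = frame[t]
    frame[t] = (xt ^ xc, zt)
    frame[c] = (xc, zc ^ zt)

def noisy_run(p):
    frame = {q: (0, 0) for q in range(2)}           # (x, z) Pauli frame per qubit
    circuit = [("H", (0,)), ("CNOT", (0, 1))]        # tiny Bell-pair circuit
    for name, qubits in circuit:
        if name == "H":
            apply_h(frame, qubits[0])
        else:
            apply_cnot(frame, *qubits)
        for q in qubits:                             # depolarizing fault after the gate
            if random.random() < p:
                ex, ez = random.choice([(1, 0), (0, 1), (1, 1)])   # X, Z or Y
                x, z = frame[q]
                frame[q] = (x ^ ex, z ^ ez)
    return all(frame[q] == (0, 0) for q in range(2))  # output unaffected?

p, n = 0.01, 50_000
success = sum(noisy_run(p) for _ in range(n)) / n
print(f"estimated success probability at p={p}: {success:.4f}")
```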

  17. Tunnel-construction methods and foraging path of a fossorial herbivore, Geomys bursarius

    USGS Publications Warehouse

    Andersen, Douglas C.

    1988-01-01

    The fossorial rodent Geomys bursarius excavates tunnels to find and gain access to belowground plant parts. This is a study of how the foraging path of this animal, as denoted by feeding-tunnel systems constructed within experimental gardens, reflects both adaptive behavior and constraints associated with the fossorial lifestyle. The principal method of tunnel construction involves the end-to-end linking of short, linear segments whose directionalities are bimodal, but symmetrically distributed about 0°. The sequence of construction of left- and right-directed segments is random, and segments tend to be equal in length. The resulting tunnel advances, zigzag-fashion, along a single heading. This linearity, and the tendency for branches to be orthogonal to the originating tunnel, are consistent with the search path predicted for a "harvesting animal" (Pyke, 1978) from optimal-foraging theory. A suite of physical and physiological constraints on the burrowing process, however, may be responsible for this geometric pattern. That is, by excavating in the most energy-efficient manner, G. bursarius automatically creates the basic components to an optimal-search path. The general search pattern was not influenced by habitat quality (plant density). Branch origins are located more often than expected at plants, demonstrating area-restricted search, a tactic commonly noted in aboveground foragers. The potential trade-offs between construction methods that minimize energy cost and those that minimize vulnerability to predators are discussed.

  18. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method

    PubMed Central

    Fogel, Allison R.; Rosenberg, Jason C.; Lehman, Frank M.; Kuperberg, Gina R.; Patel, Aniruddh D.

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5–9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such ‘authentic cadence’ melody was matched to a ‘non-cadential’ (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of

  19. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    PubMed

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  1. Acoustic method for measuring the sound speed of gases over small path lengths.

    PubMed

    Olfert, J S; Checkel, M D; Koch, C R

    2007-05-01

    Acoustic "phase shift" methods have been used in the past to accurately measure the sound speed of gases. In this work, a phase shift method for measuring the sound speed of gases over small path lengths is presented. We have called this method the discrete acoustic wave and phase detection (DAWPD) method. Experimental results show that the DAWPD method gives accurate (+/-3.2 ms) and predictable measurements that closely match theory. The sources of uncertainty in the DAWPD method are examined and it is found that ultrasonic reflections and changes in the frequency ratio of the transducers (the ratio of driving frequency to resonant frequency) can be major sources of error. Experimentally, it is shown how these sources of uncertainty can be minimized. PMID:17552851

  2. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    PubMed Central

    Huang, Cong; Ding, Pingjian

    2016-01-01

    MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis through their effects on their target mRNAs. Predicting potential miRNA-target associations from multiple types of genomic data is a considerable problem in bioinformatics research. However, most of the existing methods do not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches, as they can reconstruct the missing associations for all miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate different degrees of relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weights of the different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross-validation and pathway enrichment analysis to demonstrate the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods, and that the integration of sequence information can improve the performance of miRNA-target association prediction. PMID:27703979

  3. A chain-of-states acceleration method for the efficient location of minimum energy paths

    SciTech Connect

    Hernández, E. R. Herrero, C. P.; Soler, J. M.

    2015-11-14

    We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in poly-atomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.

  4. Partially coherent scattering in stellar chromospheres. II - The first-order escape probability method. III - A second-order escape probability method

    NASA Technical Reports Server (NTRS)

    Gayley, K. G.

    1992-01-01

    Approximate analytic expressions are derived for resonance-line wing diagnostics, accounting for frequency redistribution effects, for homogeneous slabs, and slabs with a constant Planck function gradient. Resonance-line emission profiles from a simplified conceptual standpoint are described in order to elucidate the basic physical parameters of the line-forming layers prior to the performance of detailed numerical calculations. An approximate analytic expression is derived for the dependence on stellar surface gravity of the location of the Ca II and Mg II resonance-line profile peaks. An approximate radiative transfer equation using generalized second-order escape probabilities, applicable even in the presence of nearly coherent scattering in the damping wings of resonance lines, is derived. Approximate analytic solutions that can be applied in special regimes and achieve good agreement with accurate numerical results are found.

  5. Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li

    2012-01-01

    To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites by detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture through analysis of remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral WorldView-2 data. The WorldView-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band-difference ratios. We then use principal component analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation, with a second leave-one-out step to choose parameters, on a 9,859 x 23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in the analysis improved the predictive power of the provided APM.
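
    The classification stage described here (PCA followed by linear discriminant analysis, scored by leave-one-out cross-validation and ROC AUC) can be sketched with scikit-learn as below; the features and labels are synthetic stand-ins, not the WorldView-2 band-difference statistics used by the authors.

```python
# Minimal sketch of the classification stage described in the record:
# PCA followed by linear discriminant analysis, evaluated with leave-one-out
# cross-validation and ROC AUC.  Features and labels are synthetic, not the
# WorldView-2 band-difference statistics used by the authors.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_sites, n_nonsites, n_features = 33, 100, 28
X = np.vstack([rng.normal(0.3, 1.0, (n_sites, n_features)),      # "sites"
               rng.normal(0.0, 1.0, (n_nonsites, n_features))])  # "non-sites"
y = np.array([1] * n_sites + [0] * n_nonsites)

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
scores = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-out ROC AUC:", round(roc_auc_score(y, scores), 3))
```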

  6. Path Integral Coarse-Graining Replica Exchange Method for Enhanced Sampling.

    PubMed

    Peng, Yuxing; Cao, Zhen; Zhou, Ruhong; Voth, Gregory A

    2014-09-01

    An enhanced conformational space sampling method is developed that utilizes replica exchange molecular dynamics between a set of imaginary time Feynman path integral replicas, each having an increasing degree of contraction (or coarse-graining) of the quasi-particle or "polymer beads" in the evaluation of the isomorphic ring-polymer potential energy terms. However, there is no contraction of beads in the effectively harmonic kinetic energy terms. The final replica in this procedure is the fully contracted one, in which the potential energy is evaluated only at the centroid of the beads (and hence it is the classical distribution in the centroid variable), while the initial replica has the full degree (or even a heightened degree, if desired) of quantum delocalization and tunneling in the physical potential by the polymer necklace beads. The exchange between the different ring-polymer ensembles is governed by the Metropolis criteria to guarantee detailed balance. The method is applied successfully to several model systems, ranging from one-dimensional prototype rough energy landscape models having analytical solutions to the more realistic alanine dipeptide. A detailed comparison with the classical temperature-based replica exchange method shows an improved efficiency of this new method in the classical conformational space sampling due to coupling with the fictitious path integral (quantum) replicas. PMID:26588508

  7. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible.
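
    For a concrete instance of MoLC, the sketch below estimates the two parameters of a gamma distribution from the first two log-cumulants, using mean(ln x) = ψ(k) + ln θ and var(ln x) = ψ′(k); the data are synthetic and the solver bracket is arbitrary.

```python
# Sketch of the method of logarithmic cumulants (MoLC) for a two-parameter
# gamma distribution: mean(ln x) = psi(k) + ln(theta), var(ln x) = psi'(k).
# Synthetic data; the root-finding bracket is arbitrary.
import numpy as np
from scipy.special import digamma, polygamma
from scipy.optimize import brentq
from scipy.stats import gamma

x = gamma.rvs(a=3.0, scale=2.0, size=50_000, random_state=1)   # synthetic sample

c1 = np.mean(np.log(x))          # first log-cumulant
c2 = np.var(np.log(x))           # second log-cumulant

# Invert psi'(k) = c2 for the shape k (the trigamma function is decreasing in k)
k_hat = brentq(lambda k: polygamma(1, k) - c2, 1e-3, 1e3)
theta_hat = np.exp(c1 - digamma(k_hat))
print(f"MoLC estimates: shape {k_hat:.3f}, scale {theta_hat:.3f}")
```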

  8. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
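
    As a concrete illustration of the MoLC idea, the sketch below matches the first two sample log-cumulants of gamma-distributed data to the standard relations psi(k) + ln(theta) and psi'(k) for the gamma family, and inverts the trigamma equation numerically. These relations are textbook MoLC equations; the data, solver, and bracketing interval are assumptions of this sketch, not code from the paper.

    ```python
    # Minimal MoLC sketch for a gamma distribution with shape k and scale theta:
    # E[ln x] = digamma(k) + ln(theta),  Var[ln x] = trigamma(k).
    import numpy as np
    from scipy.special import digamma, polygamma
    from scipy.optimize import brentq

    x = np.random.default_rng(1).gamma(shape=3.0, scale=2.0, size=5000)   # placeholder sample
    c1, c2 = np.mean(np.log(x)), np.var(np.log(x))                        # sample log-cumulants

    k_hat = brentq(lambda k: polygamma(1, k) - c2, 1e-3, 1e3)   # solve trigamma(k) = c2
    theta_hat = np.exp(c1 - digamma(k_hat))
    ```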

  9. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  10. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 A (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 A or better. PMID:7549885
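
    A minimal sketch of the kind of move such a sampler relies on, drawing a dihedral from a discretized propensity grid and applying a Metropolis-Hastings test, is shown below; the grid spacing, propensities, and energy function are placeholders rather than the published DPG-MC parameters.

    ```python
    # Minimal sketch: independence-sampler Metropolis-Hastings over a dihedral probability grid.
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.arange(-180.0, 180.0, 10.0)                  # 10-degree dihedral bins
    probs = np.exp(-0.5 * ((grid + 60.0) / 40.0) ** 2)     # placeholder residue propensity
    probs /= probs.sum()

    def metropolis_step(i_old, energy, kT=0.6):
        """Propose a grid index from the propensities; accept with the Hastings-corrected ratio."""
        i_new = rng.choice(len(grid), p=probs)
        dE = energy(grid[i_new]) - energy(grid[i_old])
        accept = min(1.0, (probs[i_old] / probs[i_new]) * np.exp(-dE / kT))
        return i_new if rng.random() < accept else i_old

    i = 0
    for _ in range(1000):
        i = metropolis_step(i, energy=lambda phi: 0.01 * abs(phi + 60.0))   # placeholder energy
    ```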

  11. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  12. Error reduction methods for integrated-path differential-absorption lidar measurements.

    PubMed

    Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T

    2012-07-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
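
    A toy numerical illustration of why the order of operations matters for the DAOD ("log after averaging" versus "averaging before log"): under multiplicative noise the two estimators differ. The noise model and numbers below are assumptions of this sketch, not the authors' processing chain.

    ```python
    # Toy comparison of the two averaging orders for a differential-absorption optical depth.
    import numpy as np

    rng = np.random.default_rng(3)
    true_daod = 0.5
    p_off = rng.lognormal(mean=0.0, sigma=0.3, size=10000)                    # noisy off-line returns
    p_on = p_off * np.exp(-true_daod) * rng.lognormal(0.0, 0.3, size=10000)   # noisy on-line returns

    log_after_averaging = np.log(p_off.mean() / p_on.mean())
    averaging_before_log = np.mean(np.log(p_off / p_on))
    print(log_after_averaging, averaging_before_log)   # the two estimates differ under this noise
    ```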

  13. Using computerized tomography to determine ionospheric structures. Part 2, A method using curved paths to increase vertical resolution

    SciTech Connect

    Vittitoe, C.N.

    1993-08-01

    A method is presented to unfold the two-dimensional vertical structure in electron density by using data on the total electron content for a series of paths through the ionosphere. The method uses a set of orthonormal basis functions to represent the vertical structure and takes advantage of curved paths and the eikonical equation to reduce the number of iterations required for a solution. Curved paths allow a more thorough probing of the ionosphere with a given set of transmitter and receiver positions. The approach can be directly extended to more complex geometries.
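
    The underlying inversion can be sketched as a linear least-squares problem: the vertical structure is expanded in orthonormal basis functions, and each total-electron-content (TEC) observation is a line integral of that expansion along its path. The basis, ray paths, and data below are placeholders, and the sketch ignores the curved-path and eikonal refinements described in the report.

    ```python
    # Minimal sketch: n_e(h) ~ sum_j c_j * phi_j(h); TEC_i ~ sum_j c_j * (integral of phi_j along path i),
    # giving a linear system A c = TEC solved by least squares.
    import numpy as np

    heights = np.linspace(100.0, 600.0, 200)   # km grid
    basis = np.array([np.sin((j + 1) * np.pi * (heights - 100.0) / 500.0) for j in range(5)])

    def path_matrix(paths, ds):
        """A[i, j] = approximate integral of basis function j along path i (paths given as height samples)."""
        return np.array([[np.interp(p, heights, phi).sum() * ds for phi in basis] for p in paths])

    paths = [np.linspace(150.0, 550.0, 400), np.linspace(100.0, 400.0, 400)]   # placeholder ray paths
    A = path_matrix(paths, ds=1.0)
    tec = np.array([12.0, 7.0])                                                # placeholder TEC data
    coeffs, *_ = np.linalg.lstsq(A, tec, rcond=None)
    ```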

  14. SPACE PROPULSION SYSTEM PHASED-MISSION PROBABILITY ANALYSIS USING CONVENTIONAL PRA METHODS

    SciTech Connect

    Curtis Smith; James Knudsen

    2006-05-01

    As part of a series of papers on the topic of advanced probabilistic methods, a benchmark phased-mission problem has been suggested. This problem consists of modeling a space mission using an ion propulsion system, where the mission consists of seven mission phases. The mission requires that the propulsion operate for several phases, where the configuration changes as a function of phase. The ion propulsion system itself consists of five thruster assemblies and a single propellant supply, where each thruster assembly has one propulsion power unit and two ion engines. In this paper, we evaluate the probability of mission failure using the conventional methodology of event tree/fault tree analysis. The event tree and fault trees are developed and analyzed using Systems Analysis Programs for Hands-on Integrated Reliability Evaluations (SAPHIRE). While the benchmark problem is nominally a "dynamic" problem, in our analysis the mission phases are modeled in a single event tree to show the progression from one phase to the next. The propulsion system is modeled in fault trees to account for the operation, or in this case the failure, of the system. Specifically, the propulsion system is decomposed into each of the five thruster assemblies and fed into the appropriate N-out-of-M gate to evaluate mission failure. A separate fault tree for the propulsion system is developed to account for the different success criteria of each mission phase. Common-cause failure modeling is treated using traditional (i.e., parametric) methods. As part of this paper, we discuss the overall results in addition to the positive and negative aspects of modeling dynamic situations with non-dynamic modeling techniques. One insight from the use of this conventional method for analyzing the benchmark problem is that it requires significant manual manipulation of the fault trees and how they are linked into the event tree. The conventional method also requires editing the resultant cut sets to
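
    For reference, an N-out-of-M failure gate over independent, identical components reduces to a binomial sum; the sketch below is a generic illustration with placeholder numbers, not the benchmark problem's actual success criteria or failure data.

    ```python
    # Minimal sketch: probability that at least k of n independent components fail (k-out-of-n failure gate).
    from math import comb

    def k_out_of_n_failure(p_fail, k, n):
        return sum(comb(n, m) * p_fail**m * (1 - p_fail)**(n - m) for m in range(k, n + 1))

    print(k_out_of_n_failure(p_fail=0.05, k=2, n=5))   # placeholder component failure probability
    ```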

  15. A method to measure the effective gas path length in the environmental or variable pressure scanning electron microscope.

    PubMed

    Gauvin, Raynald; Griffin, Brendan; Nockolds, Clive; Phillips, Mathew; Joy, David C

    2002-01-01

    A simple method is described to determine the effective gas path length when incident electrons scatter in the gas above the specimen. This method is based on the measurement of a characteristic x-ray line emitted from a region close to the incident beam. From various experimental measurements performed on various microscopes, it is shown that the effective gas path length may increase with the chamber pressure and that it is also often dependent on the type of x-ray bullet.

  16. Systems and methods for managing shared-path instrumentation and irradiation targets in a nuclear reactor

    DOEpatents

    Heinold, Mark R.; Berger, John F.; Loper, Milton H.; Runkle, Gary A.

    2015-12-29

    Systems and methods permit discriminate access to nuclear reactors. Systems provide penetration pathways to irradiation target loading and offloading systems, instrumentation systems, and other external systems at desired times, while limiting such access during undesired times. Systems use selection mechanisms that can be strategically positioned for space sharing to connect only desired systems to a reactor. Selection mechanisms include distinct paths, forks, diverters, turntables, and other types of selectors. Management methods with such systems permit use of the nuclear reactor and penetration pathways between different systems and functions, simultaneously and only at distinct desired times. Existing TIP drives and other known instrumentation and plant systems are usable with access management systems and methods, which can be used in any nuclear plant with access restrictions.

  17. Research on a UAV path planning method for ground observation based on threat sources

    NASA Astrophysics Data System (ADS)

    Yan, Hao; Fan, Xing; Xia, Xuezhi; Lin, Linshu

    2008-12-01

    The path planning method is one of the main research directions in current UAV (unmanned aerial vehicle) technologies. In this paper we analyze the adversarial environment that may have to be penetrated during a UAV ground-observation mission and classify the threat sources into grades according to their threat level. On the basis of a genetic algorithm, an encoding method using dimension reduction and direct quantization is adopted to combine the threat value of each leg with the flight distance, constructing a fitness evaluation function based on the total threat, and the algorithm is designed accordingly. Simulation experiments show that the method converges quickly and effectively and that the resulting routes satisfy the threat constraints, demonstrating its applicability to UAV route planning.
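
    A minimal sketch of a fitness function of the kind described, combining per-leg threat exposure with total flight distance, is given below; the weights, threat field, and route encoding are assumptions of this sketch rather than the paper's formulation.

    ```python
    # Minimal sketch: route fitness = weighted sum of accumulated threat and flight distance (lower is better).
    import numpy as np

    def route_fitness(waypoints, threat_value, w_threat=0.7, w_dist=0.3):
        threat, dist = 0.0, 0.0
        for a, b in zip(waypoints[:-1], waypoints[1:]):
            dist += np.linalg.norm(np.subtract(b, a))
            threat += 0.5 * (threat_value(a) + threat_value(b))   # leg threat from its endpoints
        return w_threat * threat + w_dist * dist

    route = [(0, 0), (2, 1), (5, 3), (8, 8)]                                     # placeholder waypoints
    inside_threat = lambda p: 1.0 if np.hypot(p[0] - 4, p[1] - 4) < 2 else 0.0   # placeholder threat source
    print(route_fitness(route, inside_threat))
    ```

    In a genetic algorithm, this value (or its inverse) would serve directly as the fitness used for selection.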

  18. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested, the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  19. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  20. Particle path tracking method in two- and three-dimensional continuously rotating detonation engines

    NASA Astrophysics Data System (ADS)

    Zhou, Rui; Wu, Dan; Liu, Yan; Wang, Jian-Ping

    2014-12-01

    The particle path tracking method is proposed and used in two-dimensional (2D) and three-dimensional (3D) numerical simulations of continuously rotating detonation engines (CRDEs). This method is used to analyze the combustion and expansion processes of the fresh particles, and the thermodynamic cycle process of CRDE. In a 3D CRDE flow field, as the radius of the annulus increases, the no-injection area proportion increases, the non-detonation proportion decreases, and the detonation height decreases. The flow field parameters on the 3D mid annulus are different from those in the 2D flow field under the same chamber size. The non-detonation proportion in the 3D flow field is less than in the 2D flow field. In the 2D and 3D CRDE, the paths of the flow particles have only a small fluctuation in the circumferential direction. The numerical thermodynamic cycle processes are qualitatively consistent with the three ideal cycle models, and they lie between the ideal F-J cycle and the ideal ZND cycle. The net mechanical work and thermal efficiency are slightly smaller in the 2D simulation than in the 3D simulation. In the 3D CRDE, as the radius of the annulus increases, the net mechanical work is almost constant, and the thermal efficiency increases. The numerical thermal efficiencies are larger than that of the F-J cycle, and much smaller than that of the ZND cycle.

  1. Using Multiple Methods to teach ASTR 101 students the Path of the Sun and Shadows

    NASA Astrophysics Data System (ADS)

    D'Cruz, Noella L.

    2015-01-01

    It seems surprising that non-science major introductory astronomy students find the daily path of the Sun and shadows created by the Sun challenging to learn even though both can be easily observed (provided students do not look directly at the Sun). In order for our students to master the relevant concepts, we have usually used lecture, a lecture tutorial (from Prather, et al.) followed by think-pair-share questions, a planetarium presentation and an animation from the Nebraska Astronomy Applet Project to teach these topics. We cover these topics in a lecture-only, one semester introductory astronomy course at Joliet Junior College. Feedback from our Spring 2014 students indicated that the planetarium presentation was the most helpful in learning the path of the Sun while none of the four teaching methods was helpful when learning about shadows cast by the Sun. Our students did not find the lecture tutorial to be much help even though such tutorials have been proven to promote deep conceptual change. In Fall 2014, we continued to use these four methods, but we modified how we teach both topics so our students could gain more from the tutorial. We hoped our modifications would cause students to have a better overall grasp of the concepts. After our regular lecture, we gave a shorter than usual planetarium presentation on the path of the Sun and we asked students to work through a shadow activity from Project Astro materials. Then students completed the lecture tutorial and some think-pair-share questions. After this, we asked students to predict the Sun's path on certain days of the year and we used the planetarium projector to show them how well their predictions matched up. We ended our coverage of these topics by asking students a few more think-pair-share questions. In our poster, we will present our approach to teaching these topics in Fall 2014, how our Fall 2014 students feel about our teaching strategies and how they fared on related test questions.

  2. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    of land, the 4 digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure estimated using these various approaches of space-time path aggregation, presumably because air pollution shows large variation over short distances.

  3. Method of Transverse Displacements Formulation for Calculating the HF Radio Wave Propagation Paths. Statement of the Problem and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Bessarab, P. F.; Klimenko, M. V.

    2016-06-01

    Fundamentals of the method of transverse displacements for calculating the HF radio-wave propagation paths are presented. The method is based on the direct variational principle for the optical path functional, but is not reduced to solving the Euler-Lagrange equations. Instead, the initial guess given by an ordered set of points is transformed successively into a ray path, while its endpoints corresponding to the positions of the transmitter and the receiver are kept fixed throughout the entire iteration process. The results of calculation by the method of transverse displacements are compared with known analytical solutions. The importance of using only transverse displacements of the ray path in the optimization procedure is also demonstrated.

  4. Patients and Methods of the PATH Biobank - A Resource for Breast Cancer Research.

    PubMed

    Waldmann, A; Anzeneder, T; Katalinic, A

    2014-04-01

    Introduction: The foundation PATH (Patients' Tumour Bank of Hope) collects in a tumour bank samples of blood, tumour, and tumour-near normal tissue from breast cancer patients and supplements them systematically with health-care data. Material and Methods: For patients from the diagnosis years 2006-2009, quantitative data were evaluated with the help of mean values and standard deviations, while for qualitative data absolute and relative incidences were assessed. Demographic and clinical features of women who used different numbers of information sources were tested for statistical significance by means of ANOVA and χ² tests. The benchmark report of the WBC and two DMP reports were used to compare oncological care. Results: For research purposes tumour tissue samples are available for 59 % of the cases, normal tissue for 62 % and blood serum samples for 92 %. From 3573 women (diagnoses 2006-2009), a total of 2697 women (75.5 %) took part in follow-up. The characteristics of the follow-up patients did not relevantly differ from those of all the patients. The responsible physician was named as the most important source of information about the disease. Young women in particular consulted several sources and also used the internet to obtain information. Discussion: Compared with therapy data from the WBC benchmark report and the DMP breast cancer reports for Bavaria and North Rhineland, respectively, the PATH patients represent an only slightly selected sample. The PATH biobank is a (still) poorly used data and sample source, which is made available upon request and positive evaluation of the study protocol. Thus, it is possible to address current questions in a short time without having to undertake extensive recruiting procedures.

  5. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    PubMed

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results. PMID:15648621

  6. Neutron Flux Interpolation with Finite Element Method in the Nuclear Fuel Cell Calculation using Collision Probability Method

    SciTech Connect

    Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti

    2010-12-23

    Nuclear reactor design and analysis of next-generation reactors require comprehensive computation, which is best executed on high-performance computing systems. The flat flux (FF) approach is a common approach for solving the integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even though the neutron cross section is assumed to be equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region will differ depending on the selected interpolation model. In this study, linear interpolation using the finite element method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculation results are in good agreement with the results from the SRAC code. In this study, the effects of the mesh on k_eff and other parameters are investigated.

  7. New method for path-length equalization of long single-mode fibers for interferometry

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Monnier, J. D.; Ozdowy, K.; Woillez, J.; Perrin, G.

    2014-07-01

    The ability to use single mode (SM) fibers for beam transport in optical interferometry offers practical advantages over conventional long vacuum pipes. One challenge facing fiber transport is maintaining constant differential path length in an environment where environmental thermal variations can lead to cm-level variations from day to night. We have fabricated three composite cables of length 470 m, each containing 4 copper wires and 3 SM fibers that operate at the astronomical H band (1500-1800 nm). Multiple fibers allow us to test performance of a circular core fiber (SMF28), a panda-style polarization-maintaining (PM) fiber, and lastly a specialty dispersion-compensated PM fiber. We will present experimental results using precision electrical resistance measurements of a composite cable beam transport system. We find that the application of 1200 W over a 470 m cable causes the optical path difference in air to change by 75 mm (+/- 2 mm) and the resistance to change from 5.36 to 5.50 Ω. Additionally, we show control of the dispersion of 470 m of fiber in a single polarization with our method, using white light interference fringes (λc=1575 nm, Δλ=75 nm).

  8. A probability evaluation method of early deterioration condition for the critical components of wind turbine generator systems

    NASA Astrophysics Data System (ADS)

    Hu, Yaogang; Li, Hui; Liao, Xinglin; Song, Erbing; Liu, Haitao; Chen, Z.

    2016-08-01

    This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain nature of wind fluctuation and intermittency, early deterioration condition evaluation poses a challenge to traditional vibration-based condition monitoring methods. Owing to their thermal inertia and strong resistance to interference, temperature characteristic parameters used as deterioration indicators are not easily disturbed by uncontrollable noise or the uncertain nature of wind. This paper provides a probability evaluation method of early deterioration condition for critical components based only on temperature characteristic parameters. First, a dynamic threshold for the deterioration degree function is proposed by analyzing operational data relating temperature and rotor speed. Second, a probability evaluation method of early deterioration condition is presented. Finally, two cases demonstrate the validity of the proposed probability evaluation method in detecting the early deterioration condition of critical components and in tracking their further deterioration.

  9. Indicator and probability kriging methods for delineating Cu, Fe, and Mn contamination in groundwater of Najafgarh Block, Delhi, India.

    PubMed

    Adhikary, Partha Pratim; Dash, Ch Jyotiprava; Bej, Renukabala; Chandrasekharan, H

    2011-05-01

    Two non-parametric kriging methods, indicator kriging and probability kriging, were compared and used to estimate the probability of concentrations of Cu, Fe, and Mn higher than a threshold value in groundwater. In indicator kriging, experimental semivariogram values were fitted well by the spherical model for Fe and Mn. The exponential model was found to be best for all the metals in probability kriging and for Cu in indicator kriging. The probability maps of all the metals exhibited an increasing risk of pollution over the entire study area. The probability kriging estimator incorporates information about order relations, which indicator kriging does not, and this improved the accuracy of estimating the probability that metal concentrations in groundwater exceed a threshold value. Evaluation of these two spatial interpolation methods through mean error (ME), mean square error (MSE), kriged reduced mean error (KRME), and kriged reduced mean square error (KRMSE) showed 3.52% better performance of probability kriging over indicator kriging. The combined results of these two kriging methods indicated that, on average, 26.34%, 65.36%, and 99.55% of the area for Cu, Fe, and Mn, respectively, falls in the risk zone, with a probability of exceeding the cutoff value of 0.6 or more. The groundwater quality map pictorially represents groundwater zones as "desirable" or "undesirable" for drinking. Thus the geostatistical approach is helpful for planners and decision makers in devising policy guidelines for efficient management of groundwater resources, so as to enhance groundwater recharge and minimize the pollution level.

  10. Indicator and probability kriging methods for delineating Cu, Fe, and Mn contamination in groundwater of Najafgarh Block, Delhi, India.

    PubMed

    Adhikary, Partha Pratim; Dash, Ch Jyotiprava; Bej, Renukabala; Chandrasekharan, H

    2011-05-01

    Two non-parametric kriging methods, indicator kriging and probability kriging, were compared and used to estimate the probability of concentrations of Cu, Fe, and Mn higher than a threshold value in groundwater. In indicator kriging, experimental semivariogram values were fitted well by the spherical model for Fe and Mn. The exponential model was found to be best for all the metals in probability kriging and for Cu in indicator kriging. The probability maps of all the metals exhibited an increasing risk of pollution over the entire study area. The probability kriging estimator incorporates information about order relations, which indicator kriging does not, and this improved the accuracy of estimating the probability that metal concentrations in groundwater exceed a threshold value. Evaluation of these two spatial interpolation methods through mean error (ME), mean square error (MSE), kriged reduced mean error (KRME), and kriged reduced mean square error (KRMSE) showed 3.52% better performance of probability kriging over indicator kriging. The combined results of these two kriging methods indicated that, on average, 26.34%, 65.36%, and 99.55% of the area for Cu, Fe, and Mn, respectively, falls in the risk zone, with a probability of exceeding the cutoff value of 0.6 or more. The groundwater quality map pictorially represents groundwater zones as "desirable" or "undesirable" for drinking. Thus the geostatistical approach is helpful for planners and decision makers in devising policy guidelines for efficient management of groundwater resources, so as to enhance groundwater recharge and minimize the pollution level. PMID:20686840

  11. Method- and species-specific detection probabilities of fish occupancy in Arctic lakes: Implications for design and management

    USGS Publications Warehouse

    Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.

    2013-01-01

    Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.

  12. A reliable acoustic path: Physical properties and a source localization method

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo

    2012-12-01

    The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists at medium ranges. The ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that the source localization method based on arrival angle is feasible in this environment. However, the conventional methods suffer from the complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method minimizes the distance between the signal subspace and the space spanned by the array manifold in a finite range-depth space rather than in the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.

  13. Path-integral Monte Carlo method for Rényi entanglement entropies.

    PubMed

    Herdman, C M; Inglis, Stephen; Roy, P-N; Melko, R G; Del Maestro, A

    2014-07-01

    We introduce a quantum Monte Carlo algorithm to measure the Rényi entanglement entropies in systems of interacting bosons in the continuum. This approach is based on a path-integral ground state method that can be applied to interacting itinerant bosons in any spatial dimension with direct relevance to experimental systems of quantum fluids. We demonstrate how it may be used to compute spatial mode entanglement, particle partitioned entanglement, and the entanglement of particles, providing insights into quantum correlations generated by fluctuations, indistinguishability, and interactions. We present proof-of-principle calculations and benchmark against an exactly soluble model of interacting bosons in one spatial dimension. As this algorithm retains the fundamental polynomial scaling of quantum Monte Carlo when applied to sign-problem-free models, future applications should allow for the study of entanglement entropy in large-scale many-body systems of interacting bosons.
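
    For orientation only: the quantity being estimated is the Rényi entropy, e.g. the second Rényi entropy S_2 = -ln Tr(rho_A^2) of a reduced density matrix rho_A. The sketch below evaluates that definition directly for a small placeholder matrix; the paper instead estimates it stochastically within a path-integral ground-state Monte Carlo simulation.

    ```python
    # Direct evaluation of S_2 = -ln Tr(rho_A^2) for a small placeholder reduced density matrix.
    import numpy as np

    rho_A = np.array([[0.7, 0.1],
                      [0.1, 0.3]])          # placeholder reduced density matrix (trace 1, positive)
    s2 = -np.log(np.trace(rho_A @ rho_A))
    print(s2)
    ```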

  14. Unsteady panel method for flows with multiple bodies moving along various paths

    NASA Technical Reports Server (NTRS)

    Richason, Thomas F.; Katz, Joseph; Ashby, Dale L.

    1993-01-01

    A potential flow based three-dimensional panel method was modified to treat time dependent conditions in which several submerged bodies can move within the fluid along different trajectories. This modification was accomplished by formulating the momentary solution in an inertial frame-of-reference, attached to the undisturbed stationary fluid. Consequently, the numerical interpretation of the multiple-body, solid-surface boundary condition and the viscous wake rollup was considerably simplified. The unsteady capability of this code was validated by comparing computed and experimental results for a finite wing undergoing pitch oscillations. In order to demonstrate the multicomponent capability, computations were made for two wings following closely intersecting paths (e.g., to avoid mid air collisions) and for a flow field with relative rotation (e.g., helicopter-rotor/fuselage interaction). Results were compared to experimental data when such data was available.

  15. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal-amplitude testing, where signal amplitudes are reduced to hit-miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
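
    A worked check of the 90/95 criterion using the exact (Clopper-Pearson) binomial lower confidence bound, which is the calculation such hit-miss demonstrations build on; this is not the DOEPOD procedure itself, and the sample sizes are illustrative.

    ```python
    # Exact binomial (Clopper-Pearson) lower confidence bound on POD from hit/miss data.
    from scipy.stats import beta

    def pod_lower_bound(hits, trials, confidence=0.95):
        if hits == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

    print(pod_lower_bound(29, 29))   # about 0.902: 29 hits in 29 trials just meets the 0.90 POD / 95% confidence target
    ```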

  16. Delivery Path Length and Holding Tree Minimization Method of Securities Delivery among the Registration Agencies Connected as Non-Tree

    NASA Astrophysics Data System (ADS)

    Shimamura, Atsushi; Moritsu, Toshiyuki; Someya, Harushi

    To dematerialize securities such as stocks or corporate bonds, the securities were registered to accounts at registration agencies connected in a tree structure. This tree structure had advantages for managing securities that were issued in large amounts and for which the number of brands was limited. But when securities such as account receivables or advance notes are dematerialized, the number of brands of securities increases greatly. In this case, managing securities with a tree structure becomes very difficult because information concentrates at the root of the tree. To resolve this problem, a graph structure is assumed instead of the tree structure. When securities are kept in a tree structure, the delivery path of securities is unique, but when they are kept in a graph structure, the delivery path is not unique. In this report, we describe the requirements on the delivery path of securities and a method for selecting the path.
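
    A minimal sketch of selecting a delivery path when the agencies form a general graph rather than a tree: breadth-first search returns one minimum-hop path between two accounts. The graph below is a placeholder, and the sketch ignores the holding-tree minimization discussed in the report.

    ```python
    # Minimal sketch: minimum-hop delivery path between two agencies in a directed graph.
    from collections import deque

    def shortest_delivery_path(graph, src, dst):
        prev, queue = {src: None}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                path = []
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for nxt in graph.get(node, ()):
                if nxt not in prev:
                    prev[nxt] = node
                    queue.append(nxt)
        return None

    agencies = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # placeholder connections
    print(shortest_delivery_path(agencies, "A", "D"))
    ```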

  17. Path durations for use in the stochastic‐method simulation of ground motions

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.

    2014-01-01

    The stochastic method of ground‐motion simulation assumes that the energy in a target spectrum is spread over a duration DT. DT is generally decomposed into the duration due to source effects (DS) and to path effects (DP). For the most commonly used source, seismological theory directly relates DS to the source corner frequency, accounting for the magnitude scaling of DT. In contrast, DP is related to propagation effects that are more difficult to represent by analytic equations based on the physics of the process. We are primarily motivated to revisit DT because the function currently employed by many implementations of the stochastic method for active tectonic regions underpredicts observed durations, leading to an overprediction of ground motions for a given target spectrum. Further, there is some inconsistency in the literature regarding which empirical duration corresponds to DT. Thus, we begin by clarifying the relationship between empirical durations and DT as used in the first author’s implementation of the stochastic method, and then we develop a new DP relationship. The new DP function gives significantly longer durations than in the previous DP function, but the relative contribution of DP to DT still diminishes with increasing magnitude. Thus, this correction is more important for small events or subfaults of larger events modeled with the stochastic finite‐fault method.

  18. A high detection probability method for Gm-APD photon counting laser radar

    NASA Astrophysics Data System (ADS)

    Zhang, Zi-jing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jian-zhong

    2013-08-01

    Since the Geiger-mode avalanche photodiode (GmAPD) was applied in laser radar systems, system performance has been enhanced due to the ultra-high sensitivity of the GmAPD, which responds even to a single photon. However, background noise makes the ultra-sensitive GmAPD produce false alarms, which severely impacts the detection performance of laser radar systems based on the Gm-APD and has become an urgent problem to be solved. To address this problem, a strategy based on a few accumulations of two GmAPDs is proposed in this paper. Finally, an experimental measurement is made under background noise on a sunny day. The results show that the few-times-accumulated two-GmAPD strategy can improve the detection probability, reduce the false alarm probability, and obtain a clear 3D image of the target.

  19. The use of the stationary phase method as a mathematical tool to determine the path of optical beams

    NASA Astrophysics Data System (ADS)

    Carvalho, Silvânia A.; De Leo, Stefano

    2015-03-01

    We use the stationary phase method to determine the paths of optical beams that propagate through a dielectric block. In the presence of partial internal reflection, we recover the geometrical result obtained by using Snell's law. For total internal reflection, the stationary phase method goes beyond Snell's law, predicting the Goos-Hänchen shift.

  20. A novel method to identify herds with an increased probability of disease introduction due to animal trade.

    PubMed

    Frössling, Jenny; Nusinovici, Simon; Nöremark, Maria; Widgren, Stefan; Lindberg, Ann

    2014-11-15

    In the design of surveillance, there is often a desire to target high risk herds. Such risk-based approaches result in better allocation of resources and improve the performance of surveillance activities. For many contagious animal diseases, movement of live animals is a main route of transmission, and because of this, herds that purchase many live animals or have a large contact network due to trade can be seen as a high risk stratum of the population. This paper presents a new method to assess herd disease risk in animal movement networks. It is an improvement to current network measures that takes direction, temporal order, and also movement size and probability of disease into account. In the study, the method was used to calculate a probability of disease ratio (PDR) of herds in simulated datasets, and of real herds based on animal movement data from dairy herds included in a bulk milk survey for Coxiella burnetii. Known differences in probability of disease are easily incorporated in the calculations and the PDR was calculated while accounting for regional differences in probability of disease, and also by applying equal probability of disease throughout the population. Each herd's increased probability of disease due to purchase of animals was compared to both the average herd and herds within the same risk stratum. The results show that the PDR is able to capture the different circumstances related to disease prevalence and animal trade contact patterns. Comparison of results based on inclusion or exclusion of differences in risk also highlights how ignoring such differences can influence the ability to correctly identify high risk herds. The method shows a potential to be useful for risk-based surveillance, in the classification of herds in control programmes or to represent influential contacts in risk factor studies.

  1. A novel method to identify herds with an increased probability of disease introduction due to animal trade.

    PubMed

    Frössling, Jenny; Nusinovici, Simon; Nöremark, Maria; Widgren, Stefan; Lindberg, Ann

    2014-11-15

    In the design of surveillance, there is often a desire to target high risk herds. Such risk-based approaches result in better allocation of resources and improve the performance of surveillance activities. For many contagious animal diseases, movement of live animals is a main route of transmission, and because of this, herds that purchase many live animals or have a large contact network due to trade can be seen as a high risk stratum of the population. This paper presents a new method to assess herd disease risk in animal movement networks. It is an improvement to current network measures that takes direction, temporal order, and also movement size and probability of disease into account. In the study, the method was used to calculate a probability of disease ratio (PDR) of herds in simulated datasets, and of real herds based on animal movement data from dairy herds included in a bulk milk survey for Coxiella burnetii. Known differences in probability of disease are easily incorporated in the calculations and the PDR was calculated while accounting for regional differences in probability of disease, and also by applying equal probability of disease throughout the population. Each herd's increased probability of disease due to purchase of animals was compared to both the average herd and herds within the same risk stratum. The results show that the PDR is able to capture the different circumstances related to disease prevalence and animal trade contact patterns. Comparison of results based on inclusion or exclusion of differences in risk also highlights how ignoring such differences can influence the ability to correctly identify high risk herds. The method shows a potential to be useful for risk-based surveillance, in the classification of herds in control programmes or to represent influential contacts in risk factor studies. PMID:25139432

  2. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    Unless it is known a priori exactly to which finite-dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  3. Burnup calculation by the method of first-flight collision probabilities using average chords prior to the first collision

    NASA Astrophysics Data System (ADS)

    Karpushkin, T. Yu.

    2012-12-01

    A technique to calculate the burnup of materials of cells and fuel assemblies using the matrices of first-flight neutron collision probabilities rebuilt at a given burnup step is presented. A method to rebuild and correct first collision probability matrices using average chords prior to the first neutron collision, which are calculated with the help of geometric modules of constructed stochastic neutron trajectories, is described. Results of calculation of the infinite multiplication factor for elementary cells with a modified material composition compared to the reference one as well as calculation of material burnup in the cells and fuel assemblies of a VVER-1000 are presented.

  4. Simulation of thermal ionization in a dense helium plasma by the Feynman path integral method

    NASA Astrophysics Data System (ADS)

    Shevkunov, S. V.

    2011-04-01

    The region of equilibrium states is studied where the quantum nature of the electron component and a strong nonideality of a plasma play a key role. The problem of negative signs in the calculation of equilibrium averages of a system of indistinguishable quantum particles with spin is solved in the macroscopic limit. It is demonstrated that the calculation can be conducted up to a numerical result. The complete set of symmetrized basis wave functions is constructed based on the Young symmetry operators. The combinatorial weight coefficients of the states corresponding to different graphs of connected Feynman paths in multiparticle systems are calculated by the method of random walk over permutation classes. The kinetic energy is calculated using a virial estimator at a finite pressure in a statistical ensemble with flexible boundaries. Based on the methods developed in the paper, the computer simulation is performed for a dense helium plasma in the temperature range from 30000 to 40000 K. The equation of state, internal energy, ionization degree, and structural characteristics of the plasma are calculated in terms of spatial correlation functions. The parameters of a pseudopotential plasma model are estimated.

  5. Simulation of thermal ionization in a dense helium plasma by the Feynman path integral method

    SciTech Connect

    Shevkunov, S. V.

    2011-04-15

    The region of equilibrium states is studied where the quantum nature of the electron component and a strong nonideality of a plasma play a key role. The problem of negative signs in the calculation of equilibrium averages of a system of indistinguishable quantum particles with spin is solved in the macroscopic limit. It is demonstrated that the calculation can be conducted up to a numerical result. The complete set of symmetrized basis wave functions is constructed based on the Young symmetry operators. The combinatorial weight coefficients of the states corresponding to different graphs of connected Feynman paths in multiparticle systems are calculated by the method of random walk over permutation classes. The kinetic energy is calculated using a virial estimator at a finite pressure in a statistical ensemble with flexible boundaries. Based on the methods developed in the paper, the computer simulation is performed for a dense helium plasma in the temperature range from 30000 to 40000 K. The equation of state, internal energy, ionization degree, and structural characteristics of the plasma are calculated in terms of spatial correlation functions. The parameters of a pseudopotential plasma model are estimated.

  6. Monte Carlo method for computing density of states and quench probability of potential energy and enthalpy landscapes.

    PubMed

    Mauro, John C; Loucks, Roger J; Balakrishnan, Jitendra; Raghavan, Srikanth

    2007-05-21

    The thermodynamics and kinetics of a many-body system can be described in terms of a potential energy landscape in multidimensional configuration space. The partition function of such a landscape can be written in terms of a density of states, which can be computed using a variety of Monte Carlo techniques. In this paper, a new self-consistent Monte Carlo method for computing density of states is described that uses importance sampling and a multiplicative update factor to achieve rapid convergence. The technique is then applied to compute the equilibrium quench probability of the various inherent structures (minima) in the landscape. The quench probability depends on both the potential energy of the inherent structure and the volume of its corresponding basin in configuration space. Finally, the methodology is extended to the isothermal-isobaric ensemble in order to compute inherent structure quench probabilities in an enthalpy landscape.
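
    The "multiplicative update factor" for the density of states is reminiscent of Wang-Landau sampling; the sketch below shows that flavor of update for a toy discrete energy function. It is only an illustration of the general idea under placeholder settings, not the authors' self-consistent scheme.

    ```python
    # Wang-Landau-style sketch: random walk accepted with min(1, g(E_old)/g(E_new)),
    # multiplicative update of g at the visited energy, refinement of f when the histogram is flat.
    import numpy as np

    rng = np.random.default_rng(4)
    n_states = 100
    energy = lambda s: abs(s - 50)        # toy energy levels 0..50
    log_g = np.zeros(51)                  # ln g(E) over energy bins
    log_f, s = 1.0, 0

    while log_f > 1e-3:
        hist = np.zeros(51)
        for _ in range(20000):
            s_new = int((s + rng.integers(-5, 6)) % n_states)
            e_old, e_new = energy(s), energy(s_new)
            if log_g[e_old] >= log_g[e_new] or rng.random() < np.exp(log_g[e_old] - log_g[e_new]):
                s = s_new
            log_g[energy(s)] += log_f     # multiplicative update of the density of states
            hist[energy(s)] += 1
        visited = hist[hist > 0]
        if visited.min() > 0.8 * visited.mean():   # histogram flat enough: refine the update factor
            log_f /= 2.0
    ```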

  7. Comparing Two Different Methods to Evaluate Convariance-Matrix of Debris Orbit State in Collision Probability Estimation

    NASA Astrophysics Data System (ADS)

    Cheng, Haowen; Liu, Jing; Xu, Yang

    The evaluation of the covariance matrix is an inevitable step when estimating collision probability based on the theory. Generally, there are two different methods to compute the covariance matrix. One is the so-called Tracking-Delta-Fitting method, first introduced for estimating the collision probability using TLE catalogue data, in which the covariance matrix is evaluated by fitting a series of differences between orbits propagated from former data and updated orbit data. In the second method, the covariance matrix is evaluated in the process of orbit determination. Both methods have their difficulties when introduced into collision probability estimation. In the first method, the value of the covariance matrix is evaluated based only on historical orbit data, ignoring information from the latest orbit determination. As a result, the accuracy of the method strongly depends on the stability of the covariance matrix of the latest updated orbit. In the second method, the evaluation of the covariance matrix is acceptable when the determined orbit satisfies the weighted-least-squares estimation; it depends on the accuracy of the observation-error covariance, which is hard to obtain in real applications and which we evaluate in our research by analyzing the residuals of orbit determination. In this paper we provide numerical tests comparing these two methods. A simulation of cataloguing objects in LEO, MEO and GEO regions has been carried out for a time span of 3 months. The influence of orbit maneuvers has been included in the GEO objects cataloguing simulation. For LEO objects cataloguing, the effect of atmospheric density variation has also been considered. At the end of the paper the accuracies of the evaluated covariance matrices and the estimated collision probabilities are tested and compared.
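
    A minimal sketch of the Tracking-Delta-Fitting idea as described above: collect the differences between propagated and updated orbit states and take their sample covariance. The state vectors and noise levels below are placeholders, not catalogue data.

    ```python
    # Minimal sketch: covariance matrix from a sample of propagated-minus-updated state differences.
    import numpy as np

    deltas = np.random.default_rng(5).normal(
        scale=[0.5, 0.5, 0.5, 1e-3, 1e-3, 1e-3], size=(50, 6))   # placeholder state differences (pos, vel)
    cov = np.cov(deltas, rowvar=False)                           # 6x6 covariance estimate
    ```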

  8. Unsteady panel method for flows with multiple bodies moving along various paths

    NASA Technical Reports Server (NTRS)

    Richardson, Thomas F.; Katz, Joseph; Ashby, Dale L.

    1994-01-01

    A potential flow based three-dimensional panel method was modified to treat time-dependent conditions in which several submerged bodies can move within the fluid along different trajectories. This modification was accomplished by formulating the momentary solution in an inertial frame of reference, attached to the undisturbed stationary fluid. Consequently, the numerical interpretation of the multiple-body, solid-surface boundary condition and the viscous wake rollup was considerably simplified. The unsteady capability of this code was calibrated and validated by comparing computed results with closed-form analytical results available for an airfoil, which was impulsively set into a constant speed forward motion. To demonstrate the multicomponent capability, computations were made for two wings following closely intersecting paths (i.e., simulations aimed at avoiding mid-air collisions) and for a flowfield with relative rotation (i.e., the case of a helicopter rotor rotating relative to the fuselage). Computed results for the cases were compared to experimental data, when such data was available.

  9. Estimation of probable maximum precipitation at the Kielce Upland (Poland) using meteorological method

    NASA Astrophysics Data System (ADS)

    Suligowski, Roman

    2014-05-01

    Probable Maximum Precipitation was estimated based upon the physical mechanisms of precipitation formation at the Kielce Upland. This estimation stems from meteorological analysis of extremely high precipitation events, which occurred in the area between 1961 and 2007, causing serious flooding from rivers that drain the entire Kielce Upland. The meteorological situation has been assessed drawing on synoptic maps, baric topography charts, satellite and radar images, as well as the results of meteorological observations derived from surface weather observation stations. The most significant elements of this research include the comparison between distinctive synoptic situations over Europe and the subsequent determination of the typical rainfall-generating mechanism. This allows the author to identify the source areas of air masses responsible for extremely high precipitation at the Kielce Upland. Analysis of the meteorological situations showed that the source areas of humid air masses which cause the largest rainfalls at the Kielce Upland are the northern Adriatic Sea and the north-eastern coast of the Black Sea. Flood hazard at the Kielce Upland catchments was triggered by daily precipitation of over 60 mm. The highest representative dew point temperature in the source areas of warm air masses (those responsible for high precipitation at the Kielce Upland) exceeded 20 degrees Celsius, with a maximum of 24.9 degrees Celsius, while precipitable water amounted to 80 mm. The value of precipitable water is also used for the computation of factors featuring the system, namely the mass transformation factor and the system effectiveness factor. The mass transformation factor is computed based on precipitable water in the feeding mass and precipitable water in the source area. The system effectiveness factor (as the indicator of the maximum inflow velocity and the maximum velocity in the zone of front or ascending currents, forced by orography) is computed from the quotient of precipitable water in

  10. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
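
    As a hedged sketch of the logistic-regression idea (the predictor variables, coefficients, and data below are invented for illustration and are not the NASA model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hedged sketch: predict the probability of completing a full-duration test from a
# few predictor variables. Predictor names and the synthetic data are assumptions.
rng = np.random.default_rng(2)
n = 400
X = np.column_stack([
    rng.normal(300, 60, n),    # planned test duration (s)
    rng.integers(0, 2, n),     # new test article (0/1)
    rng.normal(5, 2, n),       # facility modifications since last test
])
logit = 3.0 - 0.005 * X[:, 0] - 1.2 * X[:, 1] - 0.2 * X[:, 2]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # 1 = full-duration success

model = LogisticRegression().fit(X, y)
p_success = model.predict_proba([[250, 1, 4]])[0, 1]
print(f"estimated probability of a full-duration test: {p_success:.2f}")
```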

  11. Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression

    NASA Astrophysics Data System (ADS)

    Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.

    2013-10-01

    Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and of a patient during induction of controlled hypothermia. Main results. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
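
    A simplified stand-in for the BSP estimator, assuming a logistic link, a Gaussian random-walk state, binomial counts of suppressed samples per one-second window, and a brute-force grid Bayes filter instead of the paper's approximate EM algorithm:

```python
import numpy as np
from scipy.stats import binom, norm

# Simplified grid-based Bayes filter for the burst suppression probability (BSP):
# the latent state x_t follows a Gaussian random walk and the observation n_t is the
# number of suppressed samples out of N in a one-second window, binomial with
# p_t = logistic(x_t). The window size N and sigma_w are assumed values, and this
# filter is a stand-in for the paper's approximate EM state-space algorithm.
rng = np.random.default_rng(3)
T, N, sigma_w = 120, 100, 0.15
x_true = np.cumsum(rng.normal(0, sigma_w, T)) - 1.0
p_true = 1 / (1 + np.exp(-x_true))
counts = rng.binomial(N, p_true)

grid = np.linspace(-6, 6, 401)                      # discretized state space
p_grid = 1 / (1 + np.exp(-grid))
trans = norm.pdf(grid[None, :] - grid[:, None], scale=sigma_w)
trans /= trans.sum(axis=1, keepdims=True)           # random-walk transition kernel

belief = np.full(grid.size, 1.0 / grid.size)
bsp = np.empty(T)
for t in range(T):
    belief = belief @ trans                          # predict
    belief *= binom.pmf(counts[t], N, p_grid)        # update with binomial likelihood
    belief /= belief.sum()
    bsp[t] = np.dot(belief, p_grid)                  # posterior mean BSP

print("final true vs estimated BSP:", round(p_true[-1], 3), round(bsp[-1], 3))
```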

  12. A novel path generation method of onsite 5-axis surface inspection using the dual-cubic NURBS representation

    NASA Astrophysics Data System (ADS)

    Li, Wen-long; Wang, Gang; Zhang, Gang; Pang, Chang-tao; Yin, Zhou-pin

    2016-09-01

    Onsite surface inspection with a touch probe or a laser scanner is a promising technique for efficiently evaluating surface profile error. The existing work on 5-axis inspection path generation bears a serious drawback, however, as there is a drastic orientation change of the inspection axis. Such a sudden change may exceed the stringent physical limit on the speed and acceleration of the rotary motions of the machine tool. In this paper, we propose a novel path generation method for onsite 5-axis surface inspection. The accessibility cones are defined and used to generate alternative interference-free inspection directions. Then, the control points are optimally calculated to obtain the dual cubic non-uniform rational B-spline (NURBS) curves, which respectively determine the path points and the axis vectors of an inspection path. The generated inspection path is smooth and interference-free, which resolves the ‘mutation and shake’ problems and guarantees a stable speed and acceleration of the machine tool rotary motions. Its feasibility and validity are verified by onsite inspection experiments on an impeller blade.
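
    A minimal sketch of the dual-spline idea, using plain (non-rational) cubic B-splines from SciPy and invented control data in place of measured blade points:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# One cubic spline interpolates the path points and a second interpolates the
# probe-axis vectors; sampling both at the same parameter values gives each
# inspection point a smoothly varying axis. Plain cubic B-splines are used for
# brevity, and the control data are made-up values, not measured blade data.
u = np.linspace(0.0, 1.0, 6)                                # spline parameter
path_pts = np.column_stack([np.linspace(0, 50, 6),          # x (mm)
                            10 * np.sin(np.linspace(0, np.pi, 6)),  # y (mm)
                            np.linspace(0, 5, 6)])           # z (mm)
axis_vecs = np.column_stack([np.zeros(6),
                             np.sin(np.linspace(0.2, 0.6, 6)),
                             np.cos(np.linspace(0.2, 0.6, 6))])

path_spline = make_interp_spline(u, path_pts, k=3)
axis_spline = make_interp_spline(u, axis_vecs, k=3)

uu = np.linspace(0.0, 1.0, 50)
points = path_spline(uu)
axes = axis_spline(uu)
axes /= np.linalg.norm(axes, axis=1, keepdims=True)          # re-normalize axis vectors
print(points[25], axes[25])
```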

  13. Points on the Path to Probability.

    ERIC Educational Resources Information Center

    Kiernan, James F.

    2001-01-01

    Presents the problem of points and the development of the binomial triangle, or Pascal's triangle. Examines various attempts to solve this problem to give students insight into the nature of mathematical discovery. (KHR)

  14. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  15. Alternative to the steady-state method: derivation of reaction rates from first-passage times and pathway probabilities.

    PubMed Central

    Ninio, J

    1987-01-01

    An alternative method for deriving rate equations in enzyme kinetics is presented. An enzyme is followed as it moves along the various pathways allowed by the reaction scheme. The times spent in various sections of the scheme and the pathway probabilities are computed, using simple rules. The rate equation is obtained as a function of times and probabilities. The results are equivalent to those provided by the steady-state formalism. While the latter applies uniformly to all schemes, the formalism presented here requires adaptation to each additional class of schemes. However, it has the merit of allowing one to leave unspecified many details of the scheme, including topological ones. Furthermore, it allows one to decompose a scheme into subschemes, analyze the parts separately, and use the intermediate results to derive the rate equation of the complete scheme. The method is applied here to derive general equations for one- and two-entry site enzymes. PMID:3468503
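
    For the simplest one-entry-site scheme (Michaelis-Menten), the recipe reduces to combining pathway probabilities with mean section times; a small numerical check of this against the steady-state result (my own notation and rate values, not the paper's general equations):

```python
# First-passage-time illustration for the scheme E + S <-> ES -> E + P, with rates
# k1*[S] (binding), km1 (unbinding) and k2 (catalysis).
# Pathway probability of completing once ES is formed: p = k2 / (k2 + km1).
# Mean time per attempt: time to bind plus time spent in ES, repeated 1/p times on
# average. The resulting turnover rate must match the steady-state Michaelis-Menten form.
k1, km1, k2, S = 1.0e6, 50.0, 10.0, 2.0e-5   # illustrative values (M^-1 s^-1, s^-1, M)

p = k2 / (k2 + km1)                          # probability that ES goes forward
t_cycle = (1.0 / (k1 * S) + 1.0 / (km1 + k2)) / p
rate_fpt = 1.0 / t_cycle                     # turnover per enzyme (s^-1)

Km = (km1 + k2) / k1
rate_ss = k2 * S / (Km + S)                  # steady-state Michaelis-Menten rate
print(rate_fpt, rate_ss)                     # the two agree
```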

  16. Cluster membership probabilities from proper motions and multi-wavelength photometric catalogues. I. Method and application to the Pleiades cluster

    NASA Astrophysics Data System (ADS)

    Sarro, L. M.; Bouy, H.; Berihuete, A.; Bertin, E.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Barrado, D.; Solano, E.

    2014-03-01

    Context. With the advent of deep wide surveys, large photometric and astrometric catalogues of literally all nearby clusters and associations have been produced. The unprecedented accuracy and sensitivity of these data sets and their broad spatial, temporal and wavelength coverage make obsolete the classical membership selection methods that were based on a handful of colours and luminosities. We present a new technique designed to take full advantage of the high dimensionality (photometric, astrometric, temporal) of such a survey to derive self-consistent and robust membership probabilities of the Pleiades cluster. Aims: We aim at developing a methodology to infer membership probabilities to the Pleiades cluster from the DANCe multidimensional astro-photometric data set in a consistent way throughout the entire derivation. The determination of the membership probabilities has to be applicable to censored data and must incorporate the measurement uncertainties into the inference procedure. Methods: We use Bayes' theorem and a curvilinear forward model for the likelihood of the measurements of cluster members in the colour-magnitude space, to infer posterior membership probabilities. The distribution of the cluster members' proper motions and the distribution of contaminants in the full multidimensional astro-photometric space are modelled with a mixture-of-Gaussians likelihood. Results: We analyse several representation spaces composed of the proper motions plus a subset of the available magnitudes and colour indices. We select two prominent representation spaces composed of variables selected using feature relevance determination techniques based on Random Forests, and analyse the resulting samples of high probability candidates. We consistently find lists of high probability (p > 0.9975) candidates with ≈1000 sources, 4 to 5 times more than obtained in the most recent astro-photometric studies of the cluster. Conclusions: Multidimensional data sets require
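
    A strongly simplified sketch of the membership-probability calculation, using a two-component (cluster plus field) Gaussian mixture in proper-motion space only and Bayes' theorem; all parameters and data are invented, and the actual DANCe model is far richer:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two-component mixture (cluster + field contaminants) in proper-motion space;
# the posterior membership probability of each source follows from Bayes' theorem.
# Component parameters, the prior fraction, and the data are made-up values.
rng = np.random.default_rng(4)
mu_cl, cov_cl = np.array([20.0, -45.0]), np.diag([2.0, 2.0])      # mas/yr
mu_fl, cov_fl = np.array([0.0, 0.0]),   np.diag([400.0, 400.0])
prior_cl = 0.05                                                   # cluster fraction

pm = np.vstack([rng.multivariate_normal(mu_cl, cov_cl, 50),
                rng.multivariate_normal(mu_fl, cov_fl, 950)])

like_cl = multivariate_normal(mu_cl, cov_cl).pdf(pm)
like_fl = multivariate_normal(mu_fl, cov_fl).pdf(pm)
p_member = prior_cl * like_cl / (prior_cl * like_cl + (1 - prior_cl) * like_fl)
print("high-probability (p > 0.9975) candidates:", np.sum(p_member > 0.9975))
```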

  17. Prediction of rockburst probability given seismic energy and factors defined by the expert method of hazard evaluation (MRG)

    NASA Astrophysics Data System (ADS)

    Kornowski, Jerzy; Kurzeja, Joanna

    2012-04-01

    In this paper we suggest that the conditional estimator/predictor of rockburst probability (and rockburst hazard, P_T(t)) can be approximated with the formula P_T(t) = P_1(θ_1)…P_N(θ_N)·P_dynT(t), where P_dynT(t) is a time-dependent probability of rockburst given only the predicted seismic energy parameters, while P_i(θ_i) are amplifying coefficients due to local geologic and mining conditions, as defined by the Expert Method of (rockburst) Hazard Evaluation (MRG) known in the Polish mining industry. All the elements of the formula are (approximately) calculable on-line and the resulting P_T value satisfies the inequalities 0 ≤ P_T(t) ≤ 1. As a result, the hazard space (0-1) can always be divided into smaller subspaces (e.g., 0-10^-5, 10^-5-10^-4, 10^-4-10^-3, 10^-3-1), possibly named with symbols (e.g., A, B, C, D, …) called "hazard states" — which spares the prediction users from worrying about probabilities. The estimator P_T can be interpreted as a formal statement of the (reformulated) Comprehensive Method of Rockburst State of Hazard Evaluation, well known in the Polish mining industry. The estimator P_T is natural, logically consistent and physically interpretable. Due to its full formalization, it can easily be generalized, incorporating relevant information from other sources/methods.
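
    A small hedged sketch of evaluating the combined estimator and mapping it onto a named hazard state (the coefficient values and the labels A-D below are illustrative):

```python
import numpy as np

# Hedged sketch of P_T(t) = P_1(theta_1)...P_N(theta_N) * P_dynT(t): multiply the MRG
# amplifying coefficients by the seismic-energy-based probability, then bin the result
# into named hazard states. Coefficient values and bin edges are assumptions.
mrg_coefficients = [0.9, 0.7, 0.95, 0.8]   # P_i(theta_i) for local geologic/mining factors
p_dyn = 2.0e-4                             # P_dynT(t) from predicted seismic energy

p_total = np.prod(mrg_coefficients) * p_dyn          # stays within [0, 1] by construction
edges = [0.0, 1e-5, 1e-4, 1e-3, 1.0]                 # hazard subspaces from the abstract
states = ["A", "B", "C", "D"]
state = states[np.searchsorted(edges, p_total, side="right") - 1]
print(f"P_T = {p_total:.2e}, hazard state {state}")
```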

  18. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley P.

    2004-01-01

    Propulsion ground test facilities face the daily challenges of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Due to budgetary and schedule constraints, NASA and industry customers are pushing to test more components, for less money, in a shorter period of time. As these new rocket engine component test programs are undertaken, the lack of technology maturity in the test articles, combined with pushing the test facilities capabilities to their limits, tends to lead to an increase in facility breakdowns and unsuccessful tests. Over the last five years Stennis Space Center's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and broken numerous test facility and test article parts. While various initiatives have been implemented to provide better propulsion test techniques and improve the quality, reliability, and maintainability of goods and parts used in the propulsion test facilities, unexpected failures during testing still occur quite regularly due to the harsh environment in which the propulsion test facilities operate. Previous attempts at modeling the lifecycle of a propulsion component test project have met with little success. Each of the attempts suffered from incomplete or inconsistent data on which to base the models. By focusing on the actual test phase of the test project rather than the formulation, design, or construction phases, the quality and quantity of available data increase dramatically. A logistic regression model has been developed from the data collected over the last five years, allowing the probability of successfully completing a rocket propulsion component test to be calculated. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous

  19. New connection method for isolating and disinfecting intraluminal path during peritoneal dialysis solution-exchange procedures.

    PubMed

    Grabowy, R S; Kelley, R; Richter, S G; Bousquet, G G; Carr, K L

    1998-01-01

    Microbiological data have been collected on the performance of a new method of isolating and disinfecting the intraluminal path at the connect/disconnect site of a peritoneal dialysis (PD)-exchange pathway. High-temperature moist-heat (HTMH) disinfection is accomplished by a new device that uses microwave energy to heat the solution contained in the pressure-tight inner lumen of PD connector pairs between the transfer-set connector-clamp and the bag-connector break-away seal. An 85 degrees C (S.D. = 2.4 degrees C, n = 10) rise in solution temperature is seen in 12 seconds, thus yielding temperatures under pressure well over 100 degrees C with starting temperatures of 25 degrees C. Connector pairs were prepared by inoculation of a solution suspension containing at least 10(6) colony-forming units (CFU) of a test micro-organism. Approximately 0.4 mL of solution was contained within the mated connector pair. Using standard D-value determination methods, data were obtained for surviving organisms versus five exposure times and a positive control to obtain a population reduction curve. Four micro-organisms (S. epidermidis, P. aeruginosa, C. albicans, and A. niger) recognized to be among the most prevalent or problematic in causing peritonitis were tested. After microwave heating, the treated solution was aseptically withdrawn from the connector pair using a needle and syringe, plated in growth media, and incubated. Population counts of CFUs after incubation were used to establish survival curves. Results showed a tenfold population reduction in less than 3 seconds for all organisms tested. A 30-second cycle time safely achieves a > 10(8) population-reduction for bacteria and yeast organisms, and a > 10(7) population reduction for fungi. One potential benefit of using this new intraluminal disinfection method is that it may help reduce peritonitis resulting from the even more problematic pathogens such as the gram-negative bacteria and fungal organisms. PMID:10649714

  20. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  1. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  2. Induced Probabilities.

    ERIC Educational Resources Information Center

    Neel, John H.

    Induced probabilities have been largely ignored by educational researchers. Simply stated, if a new or random variable is defined in terms of a first random variable, then induced probability is the probability or density of the new random variable that can be found by summation or integration over the appropriate domains of the original random…

  3. Methods for Estimating Kidney Disease Stage Transition Probabilities Using Electronic Medical Records

    PubMed Central

    Luo, Lola; Small, Dylan; Stewart, Walter F.; Roy, Jason A.

    2013-01-01

    Chronic diseases are often described by stages of severity. Clinical decisions about what to do are influenced by the stage, whether a patient is progressing, and the rate of progression. For chronic kidney disease (CKD), relatively little is known about the transition rates between stages. To address this, we used electronic health records (EHR) data on a large primary care population, which should have the advantage of having both sufficient follow-up time and sample size to reliably estimate transition rates for CKD. However, EHR data have some features that threaten the validity of any analysis. In particular, the timing and frequency of laboratory values and clinical measurements are not determined a priori by research investigators, but rather, depend on many factors, including the current health of the patient. We developed an approach for estimating CKD stage transition rates using hidden Markov models (HMMs), when the level of information and observation time vary among individuals. To estimate the HMMs in a computationally manageable way, we used a “discretization” method to transform daily data into intervals of 30 days, 90 days, or 180 days. We assessed the accuracy and computation time of this method via simulation studies. We also used simulations to study the effect of informative observation times on the estimated transition rates. Our simulation results showed good performance of the method, even when missing data are non-ignorable. We applied the methods to EHR data from over 60,000 primary care patients who have chronic kidney disease (stage 2 and above). We estimated transition rates between six underlying disease states. The results were similar for men and women. PMID:25848580
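
    A toy sketch of the 30/90/180-day "discretization" step on irregularly timed records (column names and records are invented, not the study's EHR schema):

```python
import pandas as pd

# Hedged sketch of the discretization step: irregularly timed CKD stage observations
# are collapsed into fixed 90-day intervals (keeping the last observation in each
# interval) before fitting a hidden Markov model. Columns and records are assumptions.
records = pd.DataFrame({
    "patient_id": [1, 1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2010-01-05", "2010-02-20", "2010-07-01",
                            "2011-01-15", "2010-03-10", "2010-12-01"]),
    "ckd_stage": [2, 2, 3, 3, 3, 4],
})

interval_days = 90
records["interval"] = (
    (records["date"] - records.groupby("patient_id")["date"].transform("min")).dt.days
    // interval_days
)
discretized = (records.sort_values("date")
               .groupby(["patient_id", "interval"], as_index=False)
               .last())
print(discretized[["patient_id", "interval", "ckd_stage"]])
```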

  4. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problems of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. Then this method is applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on the global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
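
    A minimal sketch of Dempster's rule of combination for two evidence sources over a small frame of ground-cover classes (the mass values are invented, and the paper's interval-valued representation is not reproduced):

```python
from itertools import product

# Dempster's rule for two evidence sources over a small frame of ground-cover classes.
frame = frozenset({"forest", "water", "urban"})

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    # Normalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Source 1: multispectral evidence; Source 2: terrain (elevation/slope) evidence.
m_spectral = {frozenset({"forest"}): 0.6, frozenset({"forest", "urban"}): 0.3, frame: 0.1}
m_terrain = {frozenset({"forest", "water"}): 0.7, frame: 0.3}
print(combine(m_spectral, m_terrain))
```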

  5. Most Probable Number Rapid Viability PCR Method to Detect Viable Spores of Bacillus anthracis in Swab Samples

    SciTech Connect

    Letant, S E; Kane, S R; Murphy, G A; Alfaro, T M; Hodges, L; Rose, L; Raber, E

    2008-05-30

    This note presents a comparison of Most-Probable-Number Rapid Viability (MPN-RV) PCR and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN RV-PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10^4, 10^2, and 10 spores) even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN RV-PCR method is specific, and compatible with automated high-throughput sample processing and analysis protocols.

  6. Bootstrapping & Separable Monte Carlo Simulation Methods Tailored for Efficient Assessment of Probability of Failure of Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Jehan, Musarrat

    The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
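
    A hedged static illustration of the separable Monte Carlo idea with independent load and capacity: the paired (crude) estimator uses each response sample once, while the separable estimator compares every response sample with every capacity sample (distributions are invented):

```python
import numpy as np

# Crude MC pairs each expensive response sample with one capacity sample; the
# separable estimator re-uses every response sample against every capacity sample,
# reducing the variance of the failure-probability estimate at the same response cost.
rng = np.random.default_rng(5)
n = 2000
load = rng.normal(100.0, 15.0, n)        # expensive-to-compute structural response
capacity = rng.normal(160.0, 20.0, n)    # cheap-to-sample strength

pf_crude = np.mean(load > capacity)                    # paired comparison
pf_smc = np.mean(load[:, None] > capacity[None, :])    # all-pairs comparison
print(f"crude MC: {pf_crude:.4f}, separable MC: {pf_smc:.4f}")
```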

  7. A double-index method to classify Kuroshio intrusion paths in the Luzon Strait

    NASA Astrophysics Data System (ADS)

    Huang, Zhida; Liu, Hailong; Hu, Jianyu; Lin, Pengfei

    2016-06-01

    A double index (DI), which is made up of two sub-indices, is proposed to describe the spatial patterns of the Kuroshio intrusion and mesoscale eddies west of the Luzon Strait, based on satellite altimeter data. The area-integrated negative and positive geostrophic vorticities are defined as the Kuroshio warm eddy index (KWI) and the Kuroshio cold eddy index (KCI), respectively. Three typical spatial patterns are identified by the DI: the Kuroshio warm eddy path (KWEP), the Kuroshio cold eddy path (KCEP), and the leaking path. The primary features of the DI and the three patterns are further investigated and compared with previous indices. The effects of the integrated area and the algorithm of the integration are investigated in detail. In general, the DI can overcome the problem of previously used indices in which the positive and negative geostrophic vorticities cancel each other out. Thus, the proportions of missing and misjudged events are greatly reduced using the DI. The DI, as compared with previously used indices, can better distinguish the paths of the Kuroshio intrusion and can be used for further research.
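
    A hedged sketch of computing the two sub-indices by area-integrating the negative and positive geostrophic vorticity separately (the grid, the integration box, and the vorticity field are invented):

```python
import numpy as np

# Area-integrate the negative geostrophic vorticity (Kuroshio warm eddy index, KWI)
# and the positive vorticity (Kuroshio cold eddy index, KCI) over a region west of
# the Luzon Strait. Grid spacing, box, and vorticity values are made-up numbers.
lat = np.linspace(19.0, 23.0, 41)
lon = np.linspace(118.0, 121.0, 31)
dx = 111e3 * 0.1 * np.cos(np.deg2rad(lat))[:, None]   # grid-cell width (m)
dy = 111e3 * 0.1                                      # grid-cell height (m)
area = dx * dy

rng = np.random.default_rng(6)
vorticity = rng.normal(0.0, 5e-6, size=(lat.size, lon.size))   # geostrophic vorticity (s^-1)

kwi = -np.sum(np.where(vorticity < 0, vorticity, 0.0) * area)  # warm (anticyclonic) index
kci = np.sum(np.where(vorticity > 0, vorticity, 0.0) * area)   # cold (cyclonic) index
print(f"KWI = {kwi:.3e}, KCI = {kci:.3e}")
```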

  8. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-01

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method. PMID:27608986
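
    For orientation, a minimal plain NEB sketch on a 2D double-well potential, showing only the basic ingredients (tangent estimate, perpendicular true force, parallel spring force) and none of the free-end or adaptive features of FEA-NEB:

```python
import numpy as np

# Plain nudged elastic band on V(x, y) = (1 - x^2)^2 + y^2, which has minima at
# (+/-1, 0) and a saddle at (0, 0). Interior images are relaxed with the
# perpendicular component of the true force plus a parallel spring force.
def grad(p):
    x, y = p
    return np.array([-4.0 * x * (1.0 - x * x), 2.0 * y])

n_images, k_spring, step = 12, 5.0, 0.01
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_images)   # endpoints fixed at the minima
band[1:-1, 1] += 0.5                                    # bow the initial guess off the MEP

for _ in range(2000):
    new_band = band.copy()
    for i in range(1, n_images - 1):
        tau = band[i + 1] - band[i - 1]
        tau /= np.linalg.norm(tau)                       # simple central tangent
        g = grad(band[i])
        f_perp = -(g - np.dot(g, tau) * tau)             # true force, perpendicular part
        f_spring = k_spring * (np.linalg.norm(band[i + 1] - band[i])
                               - np.linalg.norm(band[i] - band[i - 1])) * tau
        new_band[i] = band[i] + step * (f_perp + f_spring)
    band = new_band

energies = [(1.0 - x * x) ** 2 + y * y for x, y in band]
print("highest-energy image (saddle estimate):", band[int(np.argmax(energies))])
```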

  9. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-01

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.

  10. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-01

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.

  11. Estimation of the detection probability for Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) with a passive acoustic method.

    PubMed

    Akamatsu, T; Wang, D; Wang, K; Li, S; Dong, S; Zhao, X; Barlow, J; Stewart, B S; Richlen, M

    2008-06-01

    Yangtze finless porpoises were surveyed by using simultaneous visual and acoustical methods from 6 November to 13 December 2006. Two research vessels towed stereo acoustic data loggers, which were used to store the intensity and sound source direction of the high frequency sonar signals produced by finless porpoises at detection ranges up to 300 m on each side of the vessel. Simple stereo beam forming allowed the separation of distinct biosonar sound sources, which enabled us to count the number of vocalizing porpoises. Acoustically, 204 porpoises were detected from one vessel and 199 from the other vessel in the same section of the Yangtze River. Visually, 163 and 162 porpoises were detected from two vessels within 300 m of the vessel track. The calculated detection probability using the acoustic method was approximately twice that for visual detection for each vessel. The difference in detection probabilities between the two methods was caused by the large number of single individuals that were missed by visual observers. However, the sizes of large groups were underestimated by using the acoustic method. Acoustic and visual observations complemented each other in the accurate detection of porpoises. The use of simple, relatively inexpensive acoustic monitoring systems should enhance population surveys of free-ranging, echolocating odontocetes.

  12. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of rail irregularity power spectrum density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  13. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
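
    A plain first-arrival shortest-path (Dijkstra) sketch on a tiny slowness grid; the multistage treatment of later reflected and converted phases is not reproduced here, and the grid spacing and slowness values are invented:

```python
import heapq

# First-arrival traveltime by Dijkstra's algorithm on a small 2D grid of slowness
# values; edge traveltime = node spacing times the average slowness of the two nodes.
slowness = [                      # s/km in each cell
    [0.20, 0.20, 0.25, 0.30],
    [0.20, 0.18, 0.25, 0.30],
    [0.22, 0.18, 0.17, 0.28],
    [0.25, 0.20, 0.17, 0.15],
]
h = 1.0                           # node spacing (km)
rows, cols = len(slowness), len(slowness[0])

def neighbors(r, c):
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < rows and 0 <= cc < cols:
            yield (rr, cc), h * 0.5 * (slowness[r][c] + slowness[rr][cc])

source, receiver = (0, 0), (3, 3)
times = {source: 0.0}
heap = [(0.0, source)]
while heap:
    t, node = heapq.heappop(heap)
    if t > times.get(node, float("inf")):
        continue
    for nxt, dt in neighbors(*node):
        if t + dt < times.get(nxt, float("inf")):
            times[nxt] = t + dt
            heapq.heappush(heap, (t + dt, nxt))

print(f"first-arrival traveltime {source} -> {receiver}: {times[receiver]:.3f} s")
```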

  14. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    SciTech Connect

    Schmidt, Matthew; Constable, Steve; Ing, Christopher; Roy, Pierre-Nicholas

    2014-06-21

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics sampling using a Langevin equation based sampling of the canonical distribution. This LePIGS method originally incorporated a trivial trial wavefunction, ψ_T, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  15. Comparison of Colilert-18 with miniaturised most probable number method for monitoring of Escherichia coli in bathing water.

    PubMed

    Tiwari, Ananda; Niemelä, Seppo I; Vepsäläinen, Asko; Rapala, Jarkko; Kalso, Seija; Pitkänen, Tarja

    2016-02-01

    The purpose of this equivalence study was to compare an alternative method, Colilert-18 Quanti-Tray (ISO 9308-2) with the European bathing water directive (2006/7/EC) reference method, the miniaturised most probable number (MMPN) method (ISO 9308-3), for the analysis of Escherichia coli. Six laboratories analysed a total of 263 bathing water samples in Finland. The comparison was carried out according to ISO 17994:2004. The recovery of E. coli using the Colilert-18 method was 7.0% and 8.6% lower than that of the MMPN method after 48 hours and 72 hours of incubation, respectively. The confirmation rate of presumptive E. coli-positive wells in the Colilert-18 and MMPN methods was high (97.8% and 98.0%, respectively). However, the testing of presumptive E. coli-negative but coliform bacteria-positive (yellow but not fluorescent) Colilert-18 wells revealed 7.3% false negative results. There were more false negatives in the naturally contaminated waters than in the samples spiked with waste water. The difference between the recovery of Colilert-18 and the MMPN method was considered not significant, and subsequently the methods are considered as equivalent for bathing water quality monitoring in Finland. Future bathing water method equivalence verification studies may use the data reported herein. The laboratories should make sure that any wells showing even minor fluorescence will be determined as positive for E. coli.

  16. A Bayesian-probability-based method for assigning protein backbone dihedral angles based on chemical shifts and local sequences.

    PubMed

    Wang, Jun; Liu, Haiyan

    2007-01-01

    Chemical shifts contain substantial information about protein local conformations. We present a method to assign individual protein backbone dihedral angles into specific regions on the Ramachandran map based on the amino acid sequences and the chemical shifts of backbone atoms of tripeptide segments. The method uses a scoring function derived from the Bayesian probability for the central residue of a query tripeptide segment to have a particular conformation. The Ramachandran map is partitioned into representative regions at two levels of resolution. The lower resolution partitioning is equivalent to the conventional definitions of different secondary structure regions on the map. At the higher resolution level, the alpha and beta regions are further divided into subregions. Predictions are attempted at both levels of resolution. We compared our method with TALOS using the original TALOS database, and obtained comparable results. Although TALOS may produce the best results with currently available databases which are much enlarged, the Bayesian-probability-based approach can provide a quantitative measure for the reliability of predictions.
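
    A hedged sketch of the Bayesian region assignment, assuming independent Gaussian likelihoods of a few secondary chemical shifts per region; the priors, means, and widths are invented, not the paper's database-derived statistics:

```python
import numpy as np
from scipy.stats import norm

# Score a small set of Ramachandran regions for the central residue from its
# secondary chemical shifts, assuming independent Gaussian likelihoods per shift
# type; the posterior follows from Bayes' theorem. All numbers are illustrative.
regions = ["alpha", "beta", "other"]
priors = np.array([0.45, 0.35, 0.20])
# Per-region mean and sigma of secondary shifts for (CA, CB, HA), in ppm.
means = {"alpha": [2.5, -0.5, -0.3], "beta": [-1.5, 1.5, 0.4], "other": [0.0, 0.0, 0.0]}
sigmas = {"alpha": [1.0, 1.0, 0.2], "beta": [1.0, 1.0, 0.2], "other": [1.5, 1.5, 0.3]}

observed = np.array([2.1, -0.8, -0.25])   # secondary shifts of the query residue

likelihood = np.array([np.prod(norm.pdf(observed, means[r], sigmas[r])) for r in regions])
posterior = priors * likelihood / np.sum(priors * likelihood)
for r, p in zip(regions, posterior):
    print(f"P({r} | shifts) = {p:.3f}")
```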

  17. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    NASA Technical Reports Server (NTRS)

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 gr. proteose peptone, 3.0 gr. yeast extract, 10.0 gr. lactose, 7.5 gr. NaCl, 0.2 gr. sodium lauryl sulfate, and 0.1 gr. sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
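
    For reference, an MPN value can also be computed directly from a dilution series by maximum likelihood rather than looked up in tables; a hedged sketch with invented sample volumes and tube counts:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Maximize the likelihood that, at concentration c (organisms/mL), x_i of n_i tubes
# receiving sample volume v_i turn positive, with P(positive) = 1 - exp(-c * v_i).
# Volumes and tube counts below are made-up values for illustration.
volumes = np.array([10.0, 1.0, 0.1])      # mL of sample per tube at each dilution
n_tubes = np.array([5, 5, 5])
positives = np.array([5, 3, 1])

def neg_log_likelihood(log_c):
    c = np.exp(log_c)
    p = np.clip(1.0 - np.exp(-c * volumes), 1e-12, 1 - 1e-12)
    return -np.sum(positives * np.log(p) + (n_tubes - positives) * np.log(1.0 - p))

res = minimize_scalar(neg_log_likelihood, bounds=(-10, 10), method="bounded")
print(f"MPN ~ {np.exp(res.x):.2f} organisms per mL")
```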

  18. Methods for integrating moderation and mediation: a general analytical framework using moderated path analysis.

    PubMed

    Edwards, Jeffrey R; Lambert, Lisa Schurer

    2007-03-01

    Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated and the mediated effects under investigation. This article presents a general analytical framework for combining moderation and mediation that integrates moderated regression analysis and path analysis. This framework clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models. The authors empirically illustrate this framework and give step-by-step instructions for estimation and interpretation. They summarize the advantages of their framework over current approaches, explain how it subsumes moderated mediation and mediated moderation, and describe how it can accommodate additional moderator and mediator variables, curvilinear relationships, and structural equation models with latent variables.

  19. FicTrac: a visual method for tracking spherical motion and generating fictive animal paths.

    PubMed

    Moore, Richard J D; Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas; van Swinderen, Bruno; Srinivasan, Mandyam V

    2014-03-30

    Studying how animals interface with a virtual reality can further our understanding of how attention, learning and memory, sensory processing, and navigation are handled by the brain, at both the neurophysiological and behavioural levels. To this end, we have developed a novel vision-based tracking system, FicTrac (Fictive path Tracking software), for estimating the path an animal makes whilst rotating an air-supported sphere using only input from a standard camera and computer vision techniques. We have found that the accuracy and robustness of FicTrac outperforms a low-cost implementation of a standard optical mouse-based approach for generating fictive paths. FicTrac is simple to implement for a wide variety of experimental configurations and, importantly, is fast to execute, enabling real-time sensory feedback for behaving animals. We have used FicTrac to record the behaviour of tethered honeybees, Apis mellifera, whilst presenting visual stimuli in both open-loop and closed-loop experimental paradigms. We found that FicTrac could accurately register the fictive paths of bees as they walked towards bright green vertical bars presented on an LED arena. Using FicTrac, we have demonstrated closed-loop visual fixation in both the honeybee and the fruit fly, Drosophila melanogaster, establishing the flexibility of this system. FicTrac provides the experimenter with a simple yet adaptable system that can be combined with electrophysiological recording techniques to study the neural mechanisms of behaviour in a variety of organisms, including walking vertebrates. PMID:24491637

  20. Follow-up: Prospective compound design using the 'SAR Matrix' method and matrix-derived conditional probabilities of activity.

    PubMed

    Gupta-Ostermann, Disha; Hirose, Yoichiro; Odagami, Takenao; Kouji, Hiroyuki; Bajorath, Jürgen

    2015-01-01

    In a previous Method Article, we have presented the 'Structure-Activity Relationship (SAR) Matrix' (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a "chemical space envelope" around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach. PMID:25949808

  1. Approximate Shortest Path Queries Using Voronoi Duals

    NASA Astrophysics Data System (ADS)

    Honiden, Shinichi; Houle, Michael E.; Sommer, Christian; Wolff, Martin

    We propose an approximation method to answer point-to-point shortest path queries in undirected edge-weighted graphs, based on random sampling and Voronoi duals. We compute a simplification of the graph by selecting nodes independently at random with probability p. Edges are generated as the Voronoi dual of the original graph, using the selected nodes as Voronoi sites. This overlay graph allows for fast computation of approximate shortest paths for general, undirected graphs. The time-quality tradeoff decision can be made at query time. We provide bounds on the approximation ratio of the path lengths as well as experimental results. The theoretical worst-case approximation ratio is bounded by a logarithmic factor. Experiments show that our approximation method based on Voronoi duals has extremely fast preprocessing time and efficiently computes reasonably short paths.

  2. Mean-free-paths in concert and chamber music halls and the correct method for calibrating dodecahedral sound sources.

    PubMed

    Beranek, Leo L; Nishihara, Noriko

    2014-01-01

    The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient αav, and that the mean-free-path between reflections is 4 V/Stot where V is the volume of the room and Stot is the total area of all of its surfaces. No account is taken of diffusivity of the surfaces. The 4 V/Stot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4 V/Stot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths Gmid and reverberation times RT60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4 V/Stot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method. PMID:24437762
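
    A tiny numerical illustration of the quantities involved, with invented hall dimensions:

```python
import numpy as np

# Mean-free-path 4V/S_tot and the Sabine and Eyring reverberation times for a hall.
# The volume, surface area, and average absorption coefficient are made-up values.
V, S_tot, alpha_av, c = 18000.0, 6500.0, 0.28, 343.0   # m^3, m^2, -, m/s

mfp = 4.0 * V / S_tot                                   # mean-free-path (m)
rt_sabine = 55.3 * V / (c * S_tot * alpha_av)
rt_eyring = 55.3 * V / (-c * S_tot * np.log(1.0 - alpha_av))
print(f"mean free path {mfp:.1f} m, RT Sabine {rt_sabine:.2f} s, Eyring {rt_eyring:.2f} s")
```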

  3. Mean-free-paths in concert and chamber music halls and the correct method for calibrating dodecahedral sound sources.

    PubMed

    Beranek, Leo L; Nishihara, Noriko

    2014-01-01

    The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient αav, and that the mean-free-path between reflections is 4 V/Stot where V is the volume of the room and Stot is the total area of all of its surfaces. No account is taken of diffusivity of the surfaces. The 4 V/Stot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4 V/Stot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths Gmid and reverberation times RT60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4 V/Stot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method.

  4. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  5. The joint probability distributions of structure-factor doublets in displacive incommensurately modulated structures and their applicability to direct methods.

    PubMed

    Peschar, R; Israël, R; Beurskens, P T

    2001-07-01

    In 1993, alternative normalized structure factors for incommensurately modulated structures were defined [Lam, Beurskens & van Smaalen (1993). Acta Cryst. A49, 709-721]. The probability distribution associated with the structure invariants E(-H)E(H')E(H - H') has approximately the same functional form as the Cochran distribution. It was shown, however, that triplet-phase relations are relatively less reliable when satellites are involved [de Gelder, Israël, Lam, Beurskens, van Smaalen, Fu & Fan (1996). Acta Cryst. A52, 947-954]. In the present paper, an alternative approach is presented: instead of studying the distribution of a three-phase invariant, the probability distribution of the phase sum of two first-order satellite reflections (h,k,l,1 and h',k',l',-1) has been derived under the assumption that the phase of the associated main reflection (h + h',k + k',l + l',0) can be calculated from the known main (or averaged) structure. Intensive tests with randomly generated artificial structures and one real structure show a significant improvement of direct-methods phase-sum statistics. Functional similarities with conventional direct methods, employing normalized structure factors and the Cochran distribution, are discussed.

  6. GTNEUT: A code for the calculation of neutral particle transport in plasmas based on the Transmission and Escape Probability method

    NASA Astrophysics Data System (ADS)

    Mandrekas, John

    2004-08-01

    GTNEUT is a two-dimensional code for the calculation of the transport of neutral particles in fusion plasmas. It is based on the Transmission and Escape Probabilities (TEP) method and can be considered a computationally efficient alternative to traditional Monte Carlo methods. The code has been benchmarked extensively against Monte Carlo and has been used to model the distribution of neutrals in fusion experiments.
    Program summary
    Title of program: GTNEUT
    Catalogue identifier: ADTX
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADTX
    Computer for which the program is designed and others on which it has been tested: The program was developed on a SUN Ultra 10 workstation and has been tested on other Unix workstations and PCs.
    Operating systems or monitors under which the program has been tested: Solaris 8, 9, HP-UX 11i, Linux Red Hat v8.0, Windows NT/2000/XP.
    Programming language used: Fortran 77
    Memory required to execute with typical data: 6 219 388 bytes
    No. of bits in a word: 32
    No. of processors used: 1
    Has the code been vectorized or parallelized?: No
    No. of bytes in distributed program, including test data, etc.: 300 709
    No. of lines in distributed program, including test data, etc.: 17 365
    Distribution format: compressed tar gzip file
    Keywords: Neutral transport in plasmas, Escape probability methods
    Nature of physical problem: This code calculates the transport of neutral particles in thermonuclear plasmas in two-dimensional geometric configurations.
    Method of solution: The code is based on the Transmission and Escape Probability (TEP) methodology [1], which is part of the family of integral transport methods for neutral particles and neutrons. The resulting linear system of equations is solved by standard direct linear system solvers (sparse and non-sparse versions are included).
    Restrictions on the complexity of the problem: The current version of the code can

  7. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    NASA Astrophysics Data System (ADS)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

    A myriad of novel applications have emerged nowadays for different types of navigation systems. One of their most frequent applications is Wayfinding. Since there are significant differences between the nature of pedestrian wayfinding problems and those of vehicles, navigation services which are designed for vehicles are not appropriate for pedestrian wayfinding purposes. In addition, diversity in environmental conditions of the users and in their preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is necessary that performs an intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved by the help of a Ubiquitous service that is adapted to the Contexts. Such a service possesses both the Context-Awareness and the User-Awareness capabilities. These capabilities are the main features of the ubiquitous services that make them flexible in response to any user in any situation. In this paper, we propose a multi-criteria path optimization method that provides a Ubiquitous Pedestrian Way Finding Service (UPWFS). The proposed method considers four criteria, summarized as the Length, Safety, Difficulty and Attraction of the path. A conceptual framework is proposed to show the factors that influence the criteria. Then, a mathematical model is developed on which the proposed path optimization method is based. Finally, data from a local district in Tehran are chosen as the case study in order to evaluate the performance of the proposed method in real situations. Results of the study show that the proposed method successfully captures the effects of the contexts in the wayfinding procedure. This demonstrates the efficiency of the proposed method in providing a ubiquitous pedestrian wayfinding service.

  8. Free energy of conformational transition paths in biomolecules: The string method and its application to myosin VI

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin; Vanden-Eijnden, Eric

    2011-01-01

    A set of techniques developed under the umbrella of the string method is used in combination with all-atom molecular dynamics simulations to analyze the conformation change between the prepowerstroke (PPS) and rigor (R) structures of the converter domain of myosin VI. The challenges specific to the application of these techniques to such a large and complex biomolecule are addressed in detail. These challenges include (i) identifying a proper set of collective variables to apply the string method, (ii) finding a suitable initial string, (iii) obtaining converged profiles of the free energy along the transition path, (iv) validating and interpreting the free energy profiles, and (v) computing the mean first passage time of the transition. A detailed description of the PPS↔R transition in the converter domain of myosin VI is obtained, including the transition path, the free energy along the path, and the rates of interconversion. The methodology developed here is expected to be useful more generally in studies of conformational transitions in complex biomolecules. PMID:21361558
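
    The string method referenced above evolves a chain of images toward the transition path and reparametrizes them so they stay evenly spaced. The following is a minimal zero-temperature sketch on a toy two-dimensional potential, not the all-atom, collective-variable implementation used for myosin VI; the potential, number of images and step size are illustrative assumptions:

      import numpy as np

      def grad_V(x):
          # Gradient of a toy two-well potential V = (x^2 - 1)^2 + 2*y^2 (not the myosin system).
          return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 4.0 * x[1]])

      def reparametrize(string):
          """Redistribute images to equal arc length along the current string."""
          d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(string, axis=0), axis=1))]
          s = np.linspace(0.0, d[-1], len(string))
          return np.column_stack([np.interp(s, d, string[:, k]) for k in range(string.shape[1])])

      # Initial string: straight line between the two minima (-1, 0) and (1, 0).
      string = np.linspace([-1.0, 0.0], [1.0, 0.0], 20)
      dt = 5e-3
      for _ in range(2000):
          string[1:-1] -= dt * np.array([grad_V(x) for x in string[1:-1]])  # relax interior images
          string = reparametrize(string)                                    # keep images equidistant
      print(string)  # converged images trace out the minimum-energy path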

  9. Path integral molecular dynamics method based on a pair density matrix approximation: An algorithm for distinguishable and identical particle systems

    NASA Astrophysics Data System (ADS)

    Miura, Shinichi; Okazaki, Susumu

    2001-09-01

    In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta, as in PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10116 (2000)]. As a test of the methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that PIMD with the pair density matrix approximation dramatically reduced the computational cost of obtaining the structural as well as dynamical (using the centroid molecular dynamics approximation) properties, at the same level of accuracy as the primitive approximation. With respect to identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of path discretization points, achieved by this approximation, enables us to construct a method that avoids the problem of the vanishing pseudopotential encountered in calculations with the primitive approximation.

  10. Influence of beam radii on a common-path compensation method for laser beam drifts in laser collimation systems

    NASA Astrophysics Data System (ADS)

    Zhao, Yuqiong; Feng, Qibo; Zhang, Bin; Cui, Cunxing

    2016-08-01

    Laser beam drift is a main factor limiting the accuracy of laser collimation measurements. In such measurements, the common-path compensation method is an efficient way to eliminate the errors normally produced by the laser beam drift. Based on our current common-path compensation system, compensation of the laser beam drift was studied for different laser beam radii and detectors. The measurements show that the compensation effect for a 3 mm beam radius is better than that for 1.5 mm and 4.0 mm beam radii. On this basis, the ratio between the 3 mm beam and the total area of the quadrant detector, which is 36%, gave the best compensation effect.

  11. Easy transition path sampling methods: flexible-length aimless shooting and permutation shooting.

    PubMed

    Mullen, Ryan Gotchy; Shea, Joan-Emma; Peters, Baron

    2015-06-01

    We present new algorithms for conducting transition path sampling (TPS). Permutation shooting rigorously preserves the total energy and momentum of the initial trajectory and is simple to implement even for rigid water molecules. Versions of aimless shooting and permutation shooting that use flexible-length trajectories have simple acceptance criteria and are more computationally efficient than fixed-length versions. Flexible-length permutation shooting and inertial likelihood maximization are used to identify the reaction coordinate for vacancy migration in a two-dimensional trigonal crystal of Lennard-Jones particles. The optimized reaction coordinate eliminates nearly all recrossing of the transition state dividing surface.

  12. Path Finder

    SciTech Connect

    Rigdon, J. Brian; Smith, Marcus Daniel; Mulder, Samuel A

    2014-01-07

    PathFinder is a graph search program, traversing a directed cyclic graph to find pathways between labeled nodes. Searches for paths through ordered sequences of labels are termed signatures. Determining the presence of signatures within one or more graphs is the primary function of Path Finder. Path Finder can work in either batch mode or interactively with an analyst. Results are limited to whether or not a given signature is present in the graph(s).
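
    Path Finder's own implementation is not shown in the record above. As a hedged illustration of the idea, the sketch below checks whether a directed (possibly cyclic) graph contains a walk whose node labels follow a given signature, using simple reachability chaining; the graph, labels and signature are hypothetical:

      from collections import deque

      def reachable(graph, start):
          """Set of nodes reachable from start by one or more edges."""
          seen, queue = set(), deque(graph.get(start, []))
          while queue:
              node = queue.popleft()
              if node not in seen:
                  seen.add(node)
                  queue.extend(graph.get(node, []))
          return seen

      def has_signature(graph, labels, signature):
          """True if some walk visits nodes carrying the signature's labels in order."""
          frontier = {n for n, lab in labels.items() if lab == signature[0]}
          for want in signature[1:]:
              frontier = {m for n in frontier for m in reachable(graph, n) if labels.get(m) == want}
              if not frontier:
                  return False
          return True

      graph = {"a": ["b"], "b": ["c", "a"], "c": []}          # directed, cyclic
      labels = {"a": "X", "b": "Y", "c": "Z"}
      print(has_signature(graph, labels, ["X", "Y", "Z"]))    # True
      print(has_signature(graph, labels, ["Z", "X"]))         # False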

  13. A derivation of centroid molecular dynamics and other approximate time evolution methods for path integral centroid variables

    NASA Astrophysics Data System (ADS)

    Jang, Seogjoo; Voth, Gregory A.

    1999-08-01

    Several methods to approximately evolve path integral centroid variables in real time are presented in this paper, the first of which, the centroid molecular dynamics (CMD) method, is recast into the new formalism of the preceding paper and thereby derived. The approximations involved in the CMD method are thus fully characterized by mathematical derivations. Additional new approaches are also presented: centroid Hamiltonian dynamics (CHD), linearized quantum dynamics (LQD), and a perturbative correction of the LQD method (PT-LQD). The CHD method is shown to be a variation of the CMD method which conserves the approximate time dependent centroid Hamiltonian. The LQD method amounts to a linear approximation for the quantum Liouville equation, while the PT-LQD method includes a perturbative correction to the LQD method. All of these approaches are then tested for the equilibrium position correlation functions of three different one-dimensional nondissipative model systems, and it is shown that certain quantum effects are accounted for by all of them, while others, such as the long time coherence characteristic of low-dimensional nondissipative systems, are not. The CMD method is found to be consistently better than the LQD method, while the PT-LQD method improves the latter and is better than the CMD method in most cases. The CHD method gives results complementary to those of the CMD method.

  14. Methods in probability and statistical inference. Final report, June 15, 1975-June 30, 1979. [Dept. of Statistics, Univ. of Chicago

    SciTech Connect

    Wallace, D L; Perlman, M D

    1980-06-01

    This report describes the research activities of the Department of Statistics, University of Chicago, during the period June 15, 1975 to July 30, 1979. Nine research projects are briefly described on the following subjects: statistical computing and approximation techniques in statistics; numerical computation of first passage distributions; probabilities of large deviations; combining independent tests of significance; small-sample efficiencies of tests and estimates; improved procedures for simultaneous estimation and testing of many correlations; statistical computing and improved regression methods; comparison of several populations; and unbiasedness in multivariate statistics. A description of the statistical consultation activities of the Department that are of interest to DOE, in particular, the scientific interactions between the Department and the scientists at Argonne National Laboratories, is given. A list of publications issued during the term of the contract is included.

  15. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series, defined through a statistical approach. After its demonstration, the method is validated by showing that it reproduces the known properties of the power spectrum when the time series contains a periodic signal, and it is further validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp for a given frequency, and shows how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied the method to evaluate the power spectrum in the case where the first derivative of the pulsar frequency is unknown and not negligible. We also studied the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found analytical solutions for the above cases in terms of sums of squared Fresnel integrals.

  16. Location and release time identification of pollution point source in river networks based on the Backward Probability Method.

    PubMed

    Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal

    2016-09-15

    The pollution of rivers by accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to have a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, as it can provide information on the prior location and the release time of the pollution. The method was originally developed and employed for groundwater pollution source identification problems. One objective of this study is to apply the method to identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on adjoint analysis, and the developed model is verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In the latter simulations, all of the suspected points are identified using only one backward simulation. The results demonstrate that all of the suspected points determined by the BPM could be possible pollution sources. The proposed approach is accurate and computationally efficient and does not require any simplification of the river geometry and flow. Because of this simplicity, it is highly recommended for practical purposes.

  17. Location and release time identification of pollution point source in river networks based on the Backward Probability Method.

    PubMed

    Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal

    2016-09-15

    The pollution of rivers by accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to have a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, as it can provide information on the prior location and the release time of the pollution. The method was originally developed and employed for groundwater pollution source identification problems. One objective of this study is to apply the method to identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on adjoint analysis, and the developed model is verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In the latter simulations, all of the suspected points are identified using only one backward simulation. The results demonstrate that all of the suspected points determined by the BPM could be possible pollution sources. The proposed approach is accurate and computationally efficient and does not require any simplification of the river geometry and flow. Because of this simplicity, it is highly recommended for practical purposes. PMID:27219462

  18. PON-mt-tRNA: a multifactorial probability-based method for classification of mitochondrial tRNA variations

    PubMed Central

    Niroula, Abhishek; Vihinen, Mauno

    2016-01-01

    Transfer RNAs (tRNAs) are essential for decoding the transcribed genetic information into proteins. Variations in human tRNAs are involved in diverse clinical phenotypes. Interestingly, all pathogenic variations in tRNAs are located in mitochondrial tRNAs (mt-tRNAs). Therefore, it is crucial to identify pathogenic variations in mt-tRNAs for disease diagnosis and proper treatment. We collected mt-tRNA variations using a classification based on evidence from several sources and used the data to develop a multifactorial probability-based prediction method, PON-mt-tRNA, for the classification of mt-tRNA single nucleotide substitutions. We integrated a machine learning-based predictor and an evidence-based likelihood ratio for pathogenicity, using evidence of segregation, biochemistry and histochemistry, to predict the posterior probability of pathogenicity of variants. The accuracy and Matthews correlation coefficient (MCC) of PON-mt-tRNA are 1.00 and 0.99, respectively. In the absence of evidence from segregation, biochemistry and histochemistry, PON-mt-tRNA classifies variations based on the machine learning method alone, with an accuracy and MCC of 0.69 and 0.39, respectively. We classified all possible single nucleotide substitutions in all human mt-tRNAs using PON-mt-tRNA. Variations in the loops are more often tolerated than variations in the stems, and the anticodon loop contains comparatively more predicted pathogenic variations than the other loops. PON-mt-tRNA is available at http://structure.bmc.lu.se/PON-mt-tRNA/. PMID:26843426
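
    PON-mt-tRNA's exact scoring is not reproduced here, but the multifactorial combination it describes (a machine-learning prediction combined with evidence-based likelihood ratios) can be illustrated with the generic odds form of Bayes' rule. The probabilities and likelihood ratios below are invented for illustration:

      def posterior_probability(p_ml, likelihood_ratios):
          """Combine a machine-learning prior with evidence likelihood ratios via Bayes' rule (odds form)."""
          odds = p_ml / (1.0 - p_ml)
          for lr in likelihood_ratios:
              odds *= lr
          return odds / (1.0 + odds)

      # Hypothetical variant: the ML predictor gives 0.30; segregation, biochemistry and
      # histochemistry evidence contribute likelihood ratios of 8, 3 and 2.
      print(posterior_probability(0.30, [8.0, 3.0, 2.0]))  # ~0.95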

  19. Beam splitter and method for generating equal optical path length beams

    DOEpatents

    Qian, Shinan; Takacs, Peter

    2003-08-26

    The present invention is a beam splitter for splitting an incident beam into first and second beams so that the first and second beams have a fixed separation and are parallel upon exiting. The beam splitter includes a first prism, a second prism, and a film located between the prisms. The first prism is defined by a first thickness and a first perimeter which has a first major base. The second prism is defined by a second thickness and a second perimeter which has a second major base. The film is located between the first major base and the second major base for splitting the incident beam into the first and second beams. The first and second perimeters are right angle trapezoidal shaped. The beam splitter is configured for generating equal optical path length beams.

  20. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  1. Toward Determining ATPase Mechanism in ABC Transporters: Development of the Reaction Path-Force Matching QM/MM Method.

    PubMed

    Zhou, Y; Ojeda-May, P; Nagaraju, M; Pu, J

    2016-01-01

    Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on QM/MM potential. A case study shows that brute force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path-force matching (RP-FM) has been developed. In RP-FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP-FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639
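
    As a caricature of the force-matching step in RP-FM (not the actual QM/MM implementation), the sketch below fits the parameters of a cheap model force so that it reproduces reference forces sampled along a one-dimensional reaction coordinate; the functional forms and data are toy assumptions:

      import numpy as np
      from scipy.optimize import least_squares

      # Toy stand-in: "ab initio" reference forces along a 1D reaction coordinate,
      # and a cheap model force with two adjustable parameters (a, b).
      x_path = np.linspace(-1.5, 1.5, 40)                     # structures along the reaction path
      f_ref = -4.0 * x_path * (x_path ** 2 - 1.0)             # reference forces (-dV/dx of a quartic)

      def model_force(params, x):
          a, b = params
          return -a * x - b * x ** 3                          # parametrized "semiempirical" force

      def residual(params):
          return model_force(params, x_path) - f_ref          # force-matching residuals along the path

      fit = least_squares(residual, x0=[1.0, 1.0])
      print(fit.x)   # parameters that best reproduce the reference forces: ~[-4, 4]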

  2. A multiscale finite element model validation method of composite cable-stayed bridge based on Probability Box theory

    NASA Astrophysics Data System (ADS)

    Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan

    2016-05-01

    Modeling and simulation are routinely used to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, together with their associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach to obtaining the coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which provides lower and upper bounds for quantifying and propagating the uncertainty of structural parameters. Structural health monitoring data from the Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy: the overlap ratio index of each modal frequency is over 89%, without considering the average absolute value of the relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of the Guanhe Bridge. The validated multiscale FE model may be further used in structural damage prognosis and safety prognosis.

  3. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    Existing methods for the early and differential diagnosis of oral cancer are limited because early symptoms are inconspicuous and imaging examination methods are imperfect. In this paper, classification models for oral adenocarcinoma, carcinoma tissues and a control group are established from just four features using the hybrid Gaussian process (HGP) classification algorithm, which introduces noise-reduction and posterior-probability mechanisms. HGP shows much better performance in the experimental results. During the experiments, oral tissues were divided into three groups: adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134), and spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. The results indicate that using HGP in laser Raman spectroscopy (LRS) analysis for the diagnosis of oral cancer gives accurate results, and the prospects for application are also satisfactory.

  4. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-08-04

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization

  5. Hamilton-Jacobi equation for the least-action/least-time dynamical path based on fast marching method.

    PubMed

    Dey, Bijoy K; Janicki, Marek R; Ayers, Paul W

    2004-10-01

    Classical dynamics can be described with Newton's equation of motion or, equivalently, with the Hamilton-Jacobi equation. Here, the possibility of using the Hamilton-Jacobi equation to describe chemical reaction dynamics is explored. This requires an efficient computational approach for constructing the physically and chemically relevant solutions to the Hamilton-Jacobi equation; here we solve Hamilton-Jacobi equations on a Cartesian grid using Sethian's fast marching method. Using this method we can, starting from an arbitrary initial conformation, find reaction paths that minimize the action or the time. The method is demonstrated by computing the mechanism for two different systems: a model system with four different stationary configurations and the H + H2 -> H2 + H reaction. Least-time paths (termed brachistochrones in classical mechanics) seem to be a suitable choice for the reaction coordinate, allowing one to determine the key intermediates and final product of a chemical reaction. For conservative systems the Hamilton-Jacobi equation does not depend on the time, so this approach may be useful for simulating systems where important motions occur on a variety of different time scales.
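
    For readers unfamiliar with the fast marching method, the sketch below solves the eikonal equation |grad T| = 1/F on a small 2D grid with a first-order upwind update and a heap-ordered narrow band. It is a generic illustration, not the authors' reaction-path code; the grid and speed field are arbitrary:

      import heapq
      import numpy as np

      def fast_marching(speed, source, h=1.0):
          """First-order fast marching solver for |grad T| = 1/speed on a 2D grid."""
          ny, nx = speed.shape
          T = np.full((ny, nx), np.inf)
          accepted = np.zeros((ny, nx), dtype=bool)
          T[source] = 0.0
          heap = [(0.0, source)]
          while heap:
              t, (i, j) = heapq.heappop(heap)
              if accepted[i, j]:
                  continue
              accepted[i, j] = True
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ii, jj = i + di, j + dj
                  if 0 <= ii < ny and 0 <= jj < nx and not accepted[ii, jj]:
                      a = min(T[ii - 1, jj] if ii > 0 else np.inf,
                              T[ii + 1, jj] if ii < ny - 1 else np.inf)
                      b = min(T[ii, jj - 1] if jj > 0 else np.inf,
                              T[ii, jj + 1] if jj < nx - 1 else np.inf)
                      f = h / speed[ii, jj]
                      if abs(a - b) >= f:                      # one-sided update
                          t_new = min(a, b) + f
                      else:                                    # two-sided (quadratic) update
                          t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                      if t_new < T[ii, jj]:
                          T[ii, jj] = t_new
                          heapq.heappush(heap, (t_new, (ii, jj)))
          return T

      T = fast_marching(np.ones((50, 50)), source=(25, 25))
      print(T[25, 45])  # roughly 20 grid units from the source for unit speed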

  6. A sequential method for passive detection, characterization, and localization of multiple low probability of intercept LFMCW signals

    NASA Astrophysics Data System (ADS)

    Hamschin, Brandon M.

    A method for passive Detection, Characterization, and Localization (DCL) of multiple low power, Linear Frequency Modulated Continuous Wave (LFMCW) (i.e., Low Probability of Intercept (LPI)) signals is proposed. We demonstrate, via simulation, laboratory, and outdoor experiments, that the method is able to detect and correctly characterize the parameters that define two simultaneous LFMCW signals with probability greater than 90% when the signal to noise ratio is -10 dB or greater. While this performance is compelling, it is far from the Cramer-Rao Lower Bound (CRLB), which we derive, and the performance of the Maximum Likelihood Estimator (MLE), whose performance we simulate. The loss in performance relative to the CRLB and the MLE is the price paid for computational tractability. The LFMCW signal is the focus of this work because of its common use in modern, low-cost radar systems. In contrast to other detection and characterization approaches, such as the MLE and those based on the Wigner-Ville Transform (WVT) or the Wigner-Ville Hough Transform (WVHT), our approach does not begin with a parametric model of the received signal that is specified directly in terms of its LFMCW constituents. Rather, we analyze the signal over time intervals that are short, non-overlapping, and contiguous by modeling it within these intervals as a sum of a small number sinusoidal (i.e., harmonic) components with unknown frequencies, deterministic but unknown amplitudes, unknown order (i.e., number of harmonic components), and unknown noise autocorrelation function. It is this model of the data that makes the solution computationally feasible, but also what leads to a degradation in performance since estimates are not based on the full time series. By modeling the signal in this way, we reliably detect the presence of multiple LFMCW signals in colored noise without the need for prewhitening, efficiently estimate (i.e. , characterize) their parameters, provide estimation error

  7. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    SciTech Connect

    Liu Di

    2008-10-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.
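
    The minimum action idea can be illustrated independently of the gene-network applications: discretize the Freidlin-Wentzell action and minimize it over paths joining two metastable states. The sketch below uses a one-dimensional bistable drift as a stand-in; the drift, time horizon and optimizer are assumptions, and the paper's modified Minimum Action Method is more sophisticated:

      import numpy as np
      from scipy.optimize import minimize

      def drift(x):
          # Toy bistable drift b(x) = x - x**3 (stable states at x = -1 and x = +1).
          return x - x ** 3

      T, n = 10.0, 100                       # path duration and number of segments
      dt = T / n

      def action(interior):
          """Discretized Freidlin-Wentzell action 0.5 * sum |dx/dt - b(x)|^2 * dt."""
          path = np.concatenate(([-1.0], interior, [1.0]))   # fixed endpoints: the two stable states
          mid = 0.5 * (path[1:] + path[:-1])
          vel = np.diff(path) / dt
          return 0.5 * np.sum((vel - drift(mid)) ** 2) * dt

      x0 = np.linspace(-1.0, 1.0, n + 1)[1:-1]               # straight-line initial guess
      result = minimize(action, x0, method="L-BFGS-B")
      optimal_path = np.concatenate(([-1.0], result.x, [1.0]))
      print(action(result.x))   # minimized action; approaches 2*(barrier height) = 0.5 as T grows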

  8. Computing the optimal path in stochastic dynamical systems.

    PubMed

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  9. Computing the optimal path in stochastic dynamical systems

    NASA Astrophysics Data System (ADS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  10. Computing the optimal path in stochastic dynamical systems.

    PubMed

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces. PMID:27586597

  11. Effect-based interpretation of toxicity test data using probability and comparison with alternative methods of analysis

    SciTech Connect

    Gully, J.R.; Baird, R.B.; Markle, P.J.; Bottomley, J.P.

    2000-01-01

    A methodology is described that incorporates the intra- and intertest variability and the biological effect of bioassay data in evaluating the toxicity of single and multiple tests for regulatory decision-making purposes. The single- and multiple-test regulatory decision probabilities were determined from t values (n - 1 degrees of freedom, one-tailed) derived from the estimated biological effect and the associated standard error at the critical sample concentration. Single-test regulatory decision probabilities below the selected minimum regulatory decision probability identify individual tests as noncompliant. A multiple-test regulatory decision probability is determined by combining the regulatory decision probabilities of a series of single tests; a multiple-test regulatory decision probability below the multiple-test regulatory decision minimum identifies groups of tests in which the magnitude and persistence of the toxicity are sufficient to be considered noncompliant or to require enforcement action. Regulatory decision probabilities derived from the t distribution were compared with results based on standard and bioequivalence hypothesis tests, using single- and multiple-concentration toxicity test data from an actual national pollutant discharge permit. The probability-based approach incorporated the precision of the effect estimate into regulatory decisions at a fixed level of effect. Also, probability-based interpretation of toxicity tests provides an incentive to laboratories to produce, and permit holders to use, high-quality, precise data, particularly when multiple tests are used in regulatory decisions. These results are contrasted with standard and bioequivalence hypothesis tests, in which the intratest precision is a determining factor in setting the biological effect used for regulatory decisions.

  12. Investigating groundwater flow paths within proglacial moraine using multiple geophysical methods

    NASA Astrophysics Data System (ADS)

    McClymont, Alastair F.; Roy, James W.; Hayashi, Masaki; Bentley, Laurence R.; Maurer, Hansruedi; Langston, Greg

    2011-03-01

    Summary: Groundwater that is stored and slowly released from alpine watersheds plays an important role in sustaining mountain rivers. Yet, little is known about how groundwater flows within typical alpine geological deposits like glacial moraine, talus, and bedrock. Within the Lake O'Hara alpine watershed of the Canadian Rockies, seasonal snowmelt and rain infiltrate into a large complex of glacial moraine and talus deposits before discharging from a series of springs within a relatively confined area of a terminal moraine deposit. In order to understand the shallow subsurface processes that govern how groundwater is routed through this area, we have undertaken a geophysical study on glacial moraine and bedrock over and around the springs. From interpretations of several seismic refraction, ground-penetrating radar (GPR), and electrical resistivity tomography (ERT) profiles, we delineate the topography of bedrock beneath moraine. Although the bedrock is generally flat under central parts of the terminal moraine, we suggest that an exposed slope of bedrock on its eastern side and a ridge of shallow bedrock imaged by ERT data underneath its western margin serve to channel deep groundwater toward the largest spring. Low-electrical-resistivity anomalies identified on ERT images within shallow parts of the moraine indicate the presence of groundwater flowing over shallow bedrock and/or ice. From coincident seismic refraction, GPR and ERT profiles, we interpret a ca. 5-m-thick deep layer of saturated moraine and fractured bedrock. Despite their relatively small storage volumes, we suggest that groundwater flowing through bedrock cracks may provide an important contribution to stream runoff during low-flow periods. The distinct deep and shallow groundwater flow paths that we interpret from geophysical data reconcile with interpretations from previous analyses of hydrograph and water chemistry data from this same area.

  13. Simulating multiple diffraction in imaging systems using a path integration method.

    PubMed

    Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Jörg; Urbach, Paul

    2016-05-10

    We present a method for simulating multiple diffraction in imaging systems based on the Huygens-Fresnel principle. The method accounts for the effects of both aberrations and diffraction and is entirely performed using Monte Carlo ray tracing. We compare the results of this method to those of reference simulations for field propagation through optical systems and for the calculation of point spread functions. The method can accurately model a wide variety of optical systems beyond the exit pupil approximation. PMID:27168302

  14. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to make the key system more difficult to crack, and the storage effectiveness and network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness than other widely used schemes. PMID:27070624

  15. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-04-09

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to make the key system more difficult to crack, and the storage effectiveness and network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness than other widely used schemes.

  16. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method

    PubMed Central

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to make the key system more difficult to crack, and the storage effectiveness and network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness than other widely used schemes. PMID:27070624

  17. Evaluating open-path FTIR spectrometer data using different quantification methods, libraries, and background spectra obtained under varying environmental conditions

    SciTech Connect

    Tomasko, M.S.

    1995-12-31

    Studies were performed to evaluate the accuracy of open-path Fourier Transform Infrared (OP-FTIR) spectrometers using a 35-foot outdoor exposure chamber in Pittsboro, North Carolina. Results obtained with the OP-FTIR spectrometer were compared to results obtained with a reference method (a gas chromatograph equipped with a flame ionization detector, GC-FID). Concentration results were evaluated in terms of the mathematical methods and spectral libraries used for quantification. In addition, the research investigated the effect on quantification of using different backgrounds obtained at various times during the day. The chemicals used in this study were toluene, cyclohexane, and methanol, and these were evaluated over the concentration range of 5-30 ppm.

  18. Exchange and spin states in quantum dots under strong spatial correlations. Computer simulation by the Feynman path integral method

    SciTech Connect

    Shevkunov, S. V.

    2013-10-15

    The fundamental laws in the behavior of electrons in model quantum dots that are caused by exchange and strong Coulomb correlations are studied. The ab initio path integral method is used to numerically simulate systems of two, three, four, and six interacting identical electrons confined in a three-dimensional spherical potential well with a parabolic confining potential against the background of thermal fluctuations. The temperature dependences of spin and collective spin magnetic susceptibility are calculated for model quantum dots of various spatial sizes. A basically exact procedure is proposed for taking into account the permutation symmetry and the spin state of electrons, which makes it possible to perform numerical calculations using modern computer facilities. The conditions of applicability of a virial energy estimator and its optimum form in exchange systems are determined. A correlation estimator of kinetic energy, which is an alternative to a basic estimator, is suggested. A fundamental relation between the kinetic energy of a quantum particle and the character of its virtual diffusion in imaginary time is demonstrated. The process of natural 'pairing' of electron spins during the compression of a quantum dot and cooling of a system is numerically reproduced in terms of path integrals. The temperature dependences of the spin magnetic susceptibility of electron pairs with a characteristic maximum caused by spin pairing are obtained.

  19. Evaluation of a micrometeorological mass balance method employing an open-path laser for measuring methane emissions

    NASA Astrophysics Data System (ADS)

    Desjardins, R. L.; Denmead, O. T.; Harper, L.; McBain, M.; Massé, D.; Kaharabata, S.

    In trials of a mass balance method for measuring methane (CH4) emissions, sonic anemometers and an open-path laser were used to measure the transport of CH4 released from a ground-level source across a downwind face 50 m long and 6 m high. Release rates matched emissions expected from dairy herds of 2 to 40 cows. The long laser path permitted inferences from measurements in only two planes, one upwind and one downwind, while the fast-response instruments allowed calculation of instantaneous horizontal fluxes rather than fluxes calculated from mean wind speeds and mean concentrations. The detection limit of the lasers was 0.02 ppmv, with the separation between the transmitters and reflectors being about 50 m. The main conclusions from the 23 trials were: (1) Emissions calculated from mean wind speeds and concentrations overestimated the true emissions calculated from instantaneous measurements by 5%. (2) Because of small changes in methane concentration, the minimum sample size in animal trials would be 10 dairy cows, producing about 40 mg CH4 s-1. (3) For release rates greater than 40 mg CH4 s-1 and with sufficient replication, the technique could detect a change in production rate of 9% (P <= 0.05). (4) Attention to perceived weaknesses in the present technique should help towards detecting changes of 5%.
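
    Conclusion (1) above reflects the fact that the time average of the product of wind speed and concentration differs from the product of their averages whenever the two are correlated. A tiny synthetic illustration (the numbers are invented, not the field data):

      import numpy as np

      rng = np.random.default_rng(0)
      u = 2.0 + 0.5 * rng.standard_normal(10000)                       # synthetic wind speed (m/s)
      c = 1.9 + 0.05 * rng.standard_normal(10000) - 0.1 * (u - 2.0)    # concentration, anti-correlated with u

      flux_instant = np.mean(u * c)              # flux from instantaneous products (the reference)
      flux_mean = np.mean(u) * np.mean(c)        # flux from time-averaged wind and concentration
      print(flux_mean / flux_instant)            # > 1 when u and c are negatively correlated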

  20. A path-independent method for barrier option pricing in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Rashidi Ranjbar, Hedieh; Seifi, Abbas

    2015-12-01

    This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
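
    For comparison purposes, a brute-force Monte Carlo pricer for a down-and-out call under a two-state regime-switching Black-Scholes model might look like the sketch below; the rates, volatilities, switching intensities and contract terms are hypothetical, and the paper's path-independent method avoids this kind of path simulation altogether:

      import numpy as np

      # Hypothetical two-state regime-switching parameters (not taken from the paper).
      r, sigma = np.array([0.03, 0.03]), np.array([0.15, 0.35])   # rate and volatility per regime
      Q = np.array([[-0.5, 0.5], [0.5, -0.5]])                     # generator of the hidden Markov chain
      S0, K, B, T = 100.0, 100.0, 80.0, 1.0                        # spot, strike, down barrier, maturity

      def mc_down_and_out_call(n_paths=10_000, n_steps=100, seed=0):
          rng = np.random.default_rng(seed)
          dt = T / n_steps
          payoff = np.zeros(n_paths)
          for p in range(n_paths):
              s, regime, alive, discount = S0, 0, True, 0.0
              for _ in range(n_steps):
                  if rng.random() < -Q[regime, regime] * dt:       # regime switch with prob ~ q*dt
                      regime = 1 - regime
                  z = rng.standard_normal()
                  s *= np.exp((r[regime] - 0.5 * sigma[regime] ** 2) * dt
                              + sigma[regime] * np.sqrt(dt) * z)
                  discount += r[regime] * dt
                  if s <= B:                                       # knocked out at the barrier
                      alive = False
                      break
              payoff[p] = np.exp(-discount) * max(s - K, 0.0) if alive else 0.0
          return payoff.mean()

      print(mc_down_and_out_call())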

  1. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than relying on the error ellipses centered on the stroke. The process takes the bivariate Gaussian probability density provided by the lightning location error ellipse for the most likely location of a stroke and integrates it to obtain the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
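
    The quantity described above (a bivariate Gaussian location error integrated over a disk centered on a facility) is easy to approximate by Monte Carlo, as in the sketch below. This is only an illustration of the integral being computed, not the operational implementation; the stroke location, covariance and key radius are made-up numbers:

      import numpy as np

      def prob_within_radius(stroke_xy, cov, facility_xy, radius, n=1_000_000, seed=0):
          """Probability that the true stroke location lies within `radius` of the facility,
          integrating the bivariate Gaussian location error by Monte Carlo."""
          rng = np.random.default_rng(seed)
          samples = rng.multivariate_normal(stroke_xy, cov, size=n)
          return np.mean(np.hypot(*(samples - facility_xy).T) <= radius)

      # Hypothetical numbers: reported stroke 1.2 km east / 0.5 km north of the facility,
      # a plausible location-error covariance, and a key radius of 1.5 km.
      cov = np.array([[0.16, 0.05], [0.05, 0.09]])          # km^2
      print(prob_within_radius([1.2, 0.5], cov, [0.0, 0.0], 1.5))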

  2. Probability of satellite collision

    NASA Technical Reports Server (NTRS)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  3. Noncontact common-path Fourier domain optical coherence tomography method for in vitro intraocular lens power measurement

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Kang, Jin U.; Calogero, Don; James, Robert H.; Ilev, Ilko K.

    2011-12-01

    We propose a novel common-path Fourier domain optical coherence tomography (CP-FD-OCT) method for noncontact, accurate, and objective in vitro measurement of the dioptric power of intraocular lens (IOL) implants. The principle of operation of the CP-FD-OCT method is simple two-dimensional scanning common-path Fourier domain optical coherence tomography. By reconstructing the anterior and posterior IOL surfaces, the radii of the two surfaces, and thus the IOL dioptric power, are determined. The CP-FD-OCT design provides high accuracy of IOL surface reconstruction; the axial position detection accuracy is calibrated at 1.22 μm in balanced saline solution, which is used to simulate in situ conditions, and the lateral sampling rate is controlled by the step size of the linear scanning systems. IOL samples with labeled dioptric powers in the low-power (5 D), mid-power (20 D and 22 D), and high-power (36 D) ranges were tested under in situ conditions. We obtained mean powers of 4.95/20.11/22.09/36.25 D with high levels of repeatability, estimated by standard deviations of 0.10/0.18/0.2/0.58 D and relative errors of 2/0.9/0.9/1.6%, based on five measurements for each IOL, respectively. The new CP-FD-OCT method provides an independent source of IOL power measurement data as well as information for evaluating other optical properties of IOLs such as refractive index, central thickness, and aberrations.
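
    Converting the reconstructed surface radii into dioptric power can be done with the standard paraxial thick-lens formula, as in the sketch below; the refractive indices and the sample radii are illustrative assumptions, not values from the study:

      def iol_power(r1_mm, r2_mm, thickness_mm, n_lens=1.46, n_medium=1.336):
          """Paraxial thick-lens dioptric power of an IOL immersed in saline/aqueous."""
          r1, r2, t = r1_mm / 1000.0, r2_mm / 1000.0, thickness_mm / 1000.0   # mm -> m
          p1 = (n_lens - n_medium) / r1          # front-surface power (D)
          p2 = (n_medium - n_lens) / r2          # back-surface power (D)
          return p1 + p2 - (t / n_lens) * p1 * p2

      # Hypothetical biconvex IOL: +12 mm and -12 mm radii, 0.8 mm centre thickness.
      print(iol_power(12.0, -12.0, 0.8))   # ~20.6 D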

  4. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiancheng; Hu, Hao; Hu, Xiangqian; Yang, Weitao

    2009-04-01

    A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is only carried out with the QM subsystem fixed. It thus avoids "on-the-fly" QM calculations and overcomes the high computational cost of direct QM/MM MD sampling. In the applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.

  5. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    SciTech Connect

    Zeng Xiancheng; Hu Hao; Hu Xiangqian; Yang Weitao

    2009-04-28

    A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is only carried out with the QM subsystem fixed. It thus avoids 'on-the-fly' QM calculations and overcomes the high computational cost of direct QM/MM MD sampling. In the applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.

  6. Fractional Levy motion through path integrals

    SciTech Connect

    Calvo, Ivan; Sanchez, Raul; Carreras, Benjamin A

    2009-01-01

    Fractional Levy motion (fLm) is the natural generalization of fractional Brownian motion in the context of self-similar stochastic processes and stable probability distributions. In this paper we give an explicit derivation of the propagator of fLm by using path integral methods. The propagators of Brownian motion and fractional Brownian motion are recovered as particular cases. The fractional diffusion equation corresponding to fLm is also obtained.

  7. Range maximization method for ramjet powered missiles with flight path constraints

    NASA Astrophysics Data System (ADS)

    Schoettle, U. M.

    1982-03-01

    Mission performance of ramjet powered missiles is strongly influenced by the trajectory flown. The trajectory optimization problem considered is to obtain the control time histories (i.e., propellant flow rate and angle of attack) which maximize the range of ramjet powered supersonic missiles with preset initial and terminal flight conditions and operational constraints. The approach chosen employs a parametric control model to represent the infinite-dimensional controls by a finite set of parameters. The resulting suboptimal parameter optimization problem is solved by means of nonlinear programming methods. Operational constraints on the state variables are treated by the method of penalty functions. The presented method and numerical results refer to a fixed geometry solid fuel integral rocket ramjet missile for air-to-surface or surface-to-surface missions. The numerical results demonstrate that continuous throttle capabilities increase range performance by about 5 to 11 percent when compared to more conventional throttle control.

  8. A Shortest-Path-Based Method for the Analysis and Prediction of Fruit-Related Genes in Arabidopsis thaliana

    PubMed Central

    Su, Fangchu; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2016-01-01

    Biologically, fruits are defined as seed-bearing reproductive structures in angiosperms that develop from the ovary. The fertilization, development and maturation of fruits are crucial for plant reproduction and are precisely regulated by intrinsic genetic regulatory factors. In this study, we used Arabidopsis thaliana as a model organism and attempted to identify novel genes related to fruit-associated biological processes. Specifically, using validated genes, we applied a shortest-path-based method to identify several novel genes in a large network constructed using the protein-protein interactions observed in Arabidopsis thaliana. The described analyses indicate that several of the discovered genes are associated with fruit fertilization, development and maturation in Arabidopsis thaliana. PMID:27434024
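
    The core operation, finding shortest paths between validated genes in a protein-protein interaction network so that intermediate nodes become candidates, can be sketched with a plain breadth-first search; the toy network and gene names below are hypothetical:

      from collections import deque

      def shortest_path(graph, src, dst):
          """Breadth-first shortest path in an unweighted protein-protein interaction graph."""
          prev, queue = {src: None}, deque([src])
          while queue:
              node = queue.popleft()
              if node == dst:
                  path = []
                  while node is not None:
                      path.append(node)
                      node = prev[node]
                  return path[::-1]
              for nbr in graph.get(node, []):
                  if nbr not in prev:
                      prev[nbr] = node
                      queue.append(nbr)
          return None

      # Toy PPI network; G1 and G4 are validated fruit genes, intermediates become candidates.
      ppi = {"G1": ["G2"], "G2": ["G1", "G3"], "G3": ["G2", "G4"], "G4": ["G3"], "G5": []}
      validated = ["G1", "G4"]
      candidates = set(shortest_path(ppi, "G1", "G4")) - set(validated)
      print(candidates)   # {'G2', 'G3'} are the novel genes lying on the shortest path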

  9. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms.

    PubMed

    Leveque, Randall J

    2011-07-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be "well balanced" and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments.

  10. RSS-Based Method for Sensor Localization with Unknown Transmit Power and Uncertainty in Path Loss Exponent

    PubMed Central

    Huang, Jiyan; Liu, Peng; Lin, Wei; Gui, Guan

    2016-01-01

    The localization of a sensor in wireless sensor networks (WSNs) has now gained considerable attention. Since the transmit power and path loss exponent (PLE) are two critical parameters in the received signal strength (RSS) localization technique, many RSS-based location methods, considering the case that both the transmit power and PLE are unknown, have been proposed in the literature. However, these methods require a search process, and cannot give a closed-form solution to sensor localization. In this paper, a novel RSS localization method with a closed-form solution based on a two-step weighted least squares estimator is proposed for the case with the unknown transmit power and uncertainty in PLE. Furthermore, the complete performance analysis of the proposed method is given in the paper. Both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The relationships between the deterministic CRLB and the proposed stochastic CRLB are presented. The paper also proves that the proposed method can reach the stochastic CRLB. PMID:27618055

  11. RSS-Based Method for Sensor Localization with Unknown Transmit Power and Uncertainty in Path Loss Exponent.

    PubMed

    Huang, Jiyan; Liu, Peng; Lin, Wei; Gui, Guan

    2016-01-01

    The localization of a sensor in wireless sensor networks (WSNs) has now gained considerable attention. Since the transmit power and path loss exponent (PLE) are two critical parameters in the received signal strength (RSS) localization technique, many RSS-based location methods, considering the case that both the transmit power and PLE are unknown, have been proposed in the literature. However, these methods require a search process, and cannot give a closed-form solution to sensor localization. In this paper, a novel RSS localization method with a closed-form solution based on a two-step weighted least squares estimator is proposed for the case with the unknown transmit power and uncertainty in PLE. Furthermore, the complete performance analysis of the proposed method is given in the paper. Both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The relationships between the deterministic CRLB and the proposed stochastic CRLB are presented. The paper also proves that the proposed method can reach the stochastic CRLB. PMID:27618055
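
    The closed-form flavor of this estimator can be illustrated with a small sketch. The snippet below is only a simplified, unweighted first step under the assumption of a known nominal path loss exponent; the paper's full two-step weighted least-squares estimator additionally weights the equations and treats uncertainty in the PLE. The function name rss_localize and the toy anchor layout are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch: closed-form (unweighted) least-squares RSS localization with an
# unknown transmit power, assuming a *nominal* path loss exponent eta. The
# paper's two-step WLS estimator additionally weights the equations and handles
# uncertainty in eta; this is only a simplified illustration.

def rss_localize(anchors, rss, eta=3.0):
    """anchors: (N, 2) anchor positions; rss: (N,) received powers in dBm."""
    anchors = np.asarray(anchors, float)
    rss = np.asarray(rss, float)
    # Model: rss_i = P0 - 10*eta*log10(d_i)  =>  d_i = c * g_i,
    # with g_i = 10^(-rss_i/(10*eta)) and c = 10^(P0/(10*eta)) unknown.
    g = 10.0 ** (-rss / (10.0 * eta))
    # d_i^2 = ||p||^2 - 2 a_i.p + ||a_i||^2 = c^2 * g_i^2 gives a linear system
    # in theta = [p_x, p_y, r, s] with r = ||p||^2 and s = c^2:
    #   -2 a_i.p + r - g_i^2 s = -||a_i||^2
    A = np.column_stack([-2.0 * anchors, np.ones(len(rss)), -g ** 2])
    b = -np.sum(anchors ** 2, axis=1)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta[:2]  # estimated sensor position

# Toy usage with five anchors and a noisy RSS model
rng = np.random.default_rng(0)
anchors = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 12]], float)
true_p, P0, eta = np.array([3.0, 4.0]), -20.0, 3.0
d = np.linalg.norm(anchors - true_p, axis=1)
rss = P0 - 10 * eta * np.log10(d) + 0.5 * rng.standard_normal(len(d))
print(rss_localize(anchors, rss, eta))
```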

  12. PathExpress update: the enzyme neighbourhood method of associating gene-expression data with metabolic pathways.

    PubMed

    Goffard, Nicolas; Frickey, Tancred; Weiller, Georg

    2009-07-01

    The post-genomic era presents us with the challenge of linking the vast amount of raw data obtained with transcriptomic and proteomic techniques to relevant biological pathways. We present an update of PathExpress, a web-based tool to interpret gene-expression data and explore the metabolic network without being restricted to predefined pathways. We define the Enzyme Neighbourhood (EN) as a sub-network of linked enzymes with a limited path length to identify the most relevant sub-networks affected in gene-expression experiments. PathExpress is freely available at: http://bioinfoserver.rsbs.anu.edu.au/utils/PathExpress/.

  13. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  14. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.
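
    As a rough illustration of the integration step described in these records, the sketch below numerically integrates an axis-aligned bivariate Gaussian over a disk centered on an arbitrary point of interest. The axis-aligned frame, the grid resolution, and the function name prob_within_radius are assumptions for illustration; the operational implementation used at the spaceports may differ.

```python
import numpy as np

# Hedged sketch: probability that a located stroke lay within radius R of a
# point of interest, given an axis-aligned bivariate Gaussian location error
# (sigma_x, sigma_y along the ellipse axes). (dx, dy) is the point of interest
# relative to the most likely stroke location, expressed in that frame.
# Simple polar-grid integration; operational implementations may differ.

def prob_within_radius(dx, dy, sigma_x, sigma_y, R, n_r=400, n_t=721):
    r = np.linspace(0.0, R, n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_t)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    # Points of the disk (centered on the point of interest) in the ellipse frame
    x = dx + rr * np.cos(tt)
    y = dy + rr * np.sin(tt)
    pdf = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2)) / (
        2.0 * np.pi * sigma_x * sigma_y
    )
    # Integrate pdf * r dr dtheta over the disk
    return float(np.trapz(np.trapz(pdf * rr, t, axis=1), r))

# Example: point of interest 1.2 km away along the major axis, 0.5 km x 0.3 km
# error ellipse, 1.0 km radius of concern
print(prob_within_radius(1.2, 0.0, 0.5, 0.3, 1.0))
```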

  15. Statistical methods to quantify the effect of mite parasitism on the probability of death in honey bee colonies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Varroa destructor is a mite parasite of European honey bees, Apis mellifera, that weakens the population, can lead to the death of an entire honey bee colony, and is believed to be the parasite with the most economic impact on beekeeping. The purpose of this study was to estimate the probability of ...

  16. Gravity-dependent signal path variation in a large VLBI telescope modelled with a combination of surveying methods

    NASA Astrophysics Data System (ADS)

    Sarti, Pierguido; Abbondanza, C.; Vittuari, L.

    2009-11-01

    The very long baseline interferometry (VLBI) antenna in Medicina (Italy) is a 32-m AZ-EL mount that was surveyed several times, adopting an indirect method, for the purpose of estimating the eccentricity vector between the co-located VLBI and Global Positioning System instruments. In order to fulfill this task, targets were located in different parts of the telescope’s structure. Triangulation and trilateration on the targets highlight a consistent amount of deformation that biases the estimate of the instrument’s reference point up to 1 cm, depending on the targets’ locations. Therefore, whenever the estimation of accurate local ties is needed, it is critical to take into consideration the action of gravity on the structure. Furthermore, deformations induced by gravity on VLBI telescopes may modify the length of the path travelled by the incoming radio signal to a non-negligible extent. As a consequence, contrary to what is usually assumed, the relative distance of the feed horn’s phase centre with respect to the elevation axis may vary, depending on the telescope’s pointing elevation. The Medicina telescope’s signal path variation ΔL increases by approximately 2 cm as the pointing elevation changes from horizon to zenith; it is described by an elevation-dependent second-order polynomial function computed, according to Clark and Thomsen (Technical report, 100696, NASA, Greenbelt, 1988), as a linear combination of three terms: receiver displacement ΔR, primary reflector’s vertex displacement ΔV and focal length variations ΔF. ΔL was investigated with a combination of terrestrial triangulation and trilateration, laser scanning and a finite element model of the antenna. The antenna gain (or auto-focus curve) ΔG is routinely determined through astronomical observations. A surprisingly accurate reproduction of ΔG can be obtained with a combination of ΔV, ΔF and ΔR.

  17. Contribution analysis of bus pass-by noise based on dynamic transfer path method

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Zheng, Sifa; Hao, Peng; Lian, Xiaomin

    2011-10-01

    Bus pass-by noise has become one of the main noise sources that seriously disturb the mental and physical health of urban residents. The key to reducing bus noise is to identify the major noise sources. In this paper, a dynamic transfer characteristic model of the bus during acceleration is established, which can quantitatively describe the relationship between the sound or vibration sources of the vehicle and the response points outside the vehicle; a test method has also been designed that can quickly and easily identify the contributions to the bus pass-by noise. Experimental results show that the dynamic transfer characteristic model can identify the main noise sources and their contributions during acceleration, which is significant for bus noise reduction.

  18. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502

  19. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset.

    PubMed

    Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
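
    The core bookkeeping behind the n-step transition probabilities can be sketched briefly. The snippet below builds a row-stochastic matrix from counted single-step transitions and raises it to the power n, roughly in the spirit of the rough-prediction idea; the location IDs, counts, and the helper transition_matrix are toy illustrations, not the authors' implementation.

```python
import numpy as np

# Hedged sketch: a row-stochastic transition matrix built from counted
# single-step location transitions, with n-step probabilities obtained by a
# matrix power. Location IDs and counts are toy data, not mined sequential rules.

def transition_matrix(transitions, n_locations):
    counts = np.zeros((n_locations, n_locations))
    for src, dst in transitions:              # single-step (from, to) pairs
        counts[src, dst] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0             # leave unseen rows as all zeros
    return counts / row_sums                  # normalized transition probabilities

transitions = [(0, 1), (0, 1), (0, 2), (1, 2), (2, 0), (2, 1), (1, 0)]
P = transition_matrix(transitions, n_locations=3)
Pn = np.linalg.matrix_power(P, 3)             # 3-step transition probabilities
print(Pn[0])   # probabilities of each location three steps after location 0
```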

  20. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  1. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  2. Methods for assessing movement path recursion with application to African buffalo in South Africa

    USGS Publications Warehouse

    Bar-David, S.; Bar-David, I.; Cross, P.C.; Ryan, S.J.; Knechtel, C.U.; Getz, W.M.

    2009-01-01

    Recent developments of automated methods for monitoring animal movement, e.g., global positioning systems (GPS) technology, yield high-resolution spatiotemporal data. To gain insights into the processes creating movement patterns, we present two new techniques for extracting information from these data on repeated visits to a particular site or patch ("recursions"). Identification of such patches and quantification of recursion pathways, when combined with patch-related ecological data, should contribute to our understanding of the habitat requirements of large herbivores, of factors governing their space-use patterns, and their interactions with the ecosystem. We begin by presenting output from a simple spatial model that simulates movements of large-herbivore groups based on minimal parameters: resource availability and rates of resource recovery after a local depletion. We then present the details of our new techniques of analyses (recursion analysis and circle analysis) and apply them to data generated by our model, as well as two sets of empirical data on movements of African buffalo (Syncerus caffer): the first collected in Klaserie Private Nature Reserve and the second in Kruger National Park, South Africa. Our recursion analyses of model outputs provide us with a basis for inferring aspects of the processes governing the production of buffalo recursion patterns, particularly the potential influence of resource recovery rate. Although the focus of our simulations was a comparison of movement patterns produced by different resource recovery rates, we conclude our paper with a comprehensive discussion of how recursion analyses can be used when appropriate ecological data are available to elucidate various factors influencing movement. Inter alia, these include the various limiting and preferred resources, parasites, and topographical and landscape factors. © 2009 by the Ecological Society of America.
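
    A minimal sketch of a recursion count, loosely in the spirit of the circle analysis mentioned above, is given below: it counts how many separate times a trajectory enters a circle of radius R around a focal site. The toy track and the helper count_recursions are illustrative assumptions, not the authors' full algorithm.

```python
import numpy as np

# Hedged sketch of a recursion count: the number of separate times a trajectory
# enters a circle of radius R around a focal site. Toy track only; the paper's
# recursion and circle analyses are considerably richer.

def count_recursions(track, site, R):
    d = np.linalg.norm(np.asarray(track, float) - np.asarray(site, float), axis=1)
    inside = d <= R
    entries = np.count_nonzero(~inside[:-1] & inside[1:])   # outside -> inside moves
    return entries + int(inside[0])   # count a start inside the circle as a visit

track = [(0, 0), (1, 0), (2, 0), (5, 5), (1, 1), (0, 2), (6, 6), (1, 0)]
print(count_recursions(track, site=(0, 0), R=2.5))   # 3 visits for this toy track
```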

  3. Patients recording clinical encounters: a path to empowerment? Assessment by mixed methods

    PubMed Central

    Elwyn, Glyn; Barr, Paul James; Grande, Stuart W

    2015-01-01

    Objective To examine the motivations of patients recording clinical encounters, covertly or otherwise, and why some do not wish to record encounters. Design Mixed-methods analysis of survey data and nested semistructured interviews. Setting Survey to UK audience, using social media and radio broadcast. Participants 168 survey respondents, of whom 161 were 18 years of age or older (130 completions). Of the 56 participants who agreed to be contacted, we included data from 17 interviews. Results 19 (15%) respondents indicated having secretly recorded a clinical encounter and 14 (11%) were aware of someone who had secretly recorded a clinical encounter. 45 (35%) said they would consider recording secretly and 44 (34%) said they would record after asking permission. In total, 69% of respondents indicated their desire to record clinical encounters, split equally between wanting to do so covertly or with permission. Thematic analysis of the interviews showed that most patients are motivated by the wish to replay, relisten and share the recording with others. Some are also motivated by the idea of owning a personal record, and its potential use as verification of a poor healthcare experience. The rationale for permission seeking was based on the wish to prioritise a trusting relationship with a health professional. Those who preferred to record covertly described a pre-existing lack of trust, a fear that recording would be denied, and a concern that an affronted clinician would deny them access to future care. There was a general wish that recording should be facilitated. Conclusions Patients’ prime motivation for recording is to enhance their experience of care, and to share it with others. Patients know that recording challenges the ‘ceremonial order of the clinic’, and so some decide to act covertly. Patients wanted clearer, more permissive policies to be developed. PMID:26264274

  4. Source Apportionment of the Anthropogenic Increment to Ozone, Formaldehyde, and Nitrogen Dioxide by the Path-Integral Method in a 3D Model.

    PubMed

    Dunker, Alan M; Koo, Bonyoung; Yarwood, Greg

    2015-06-01

    The anthropogenic increment of a species is the difference in concentration between a base-case simulation with all emissions included and a background simulation without the anthropogenic emissions. The Path-Integral Method (PIM) is a new technique that can determine the contributions of individual anthropogenic sources to this increment. The PIM was applied to a simulation of O3 formation in July 2030 in the U.S., using the Comprehensive Air Quality Model with Extensions and assuming advanced controls on light-duty vehicles (LDVs) and other sources. The PIM determines the source contributions by integrating first-order sensitivity coefficients over a range of emissions, a path, from the background case to the base case. There are many potential paths, with each representing a specific emission-control strategy leading to zero anthropogenic emissions, i.e., controlling all sources together versus controlling some source(s) preferentially are different paths. Three paths were considered, and the O3, formaldehyde, and NO2 anthropogenic increments were apportioned to five source categories. At rural and urban sites in the eastern U.S. and for all three paths, point sources typically have the largest contribution to the O3 and NO2 anthropogenic increments, and either LDVs or area sources, the smallest. Results for formaldehyde are more complex. PMID:25938820

  5. A bootstrapping method to assess the influence of age, obesity, gender, and gait speed on probability of tripping as a function of obstacle height.

    PubMed

    Garman, Christina Rossi; Franck, Christopher T; Nussbaum, Maury A; Madigan, Michael L

    2015-04-13

    Tripping is a common mechanism for inducing falls. The purpose of this study was to present a method that determines the probability of tripping over an unseen obstacle while avoiding the ambiguous situation wherein median minimum foot clearance (MFC) and MFC interquartile range concurrently increase or decrease, and determines how the probability of tripping varies with potential obstacle height. The method was used to investigate the effects of age, obesity, gender, and gait speed on the probability of tripping. MFC was measured while 80 participants walked along a 10-m walkway at self-selected and hurried gait speeds. The method was able to characterize the probability of tripping as a function of obstacle height, and identify effects of age, obesity, gender, and gait speed. More specifically, the probability of tripping was higher among older adults, higher among obese adults, higher among females, and higher at the slower self-selected speed. Many of these results were not found, or were not clear, using the more common approach of characterizing the likelihood of tripping based on MFC measures of central tendency and variability.
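
    A simple way to picture the quantity being estimated is sketched below: the probability of tripping over an obstacle of height h taken as the proportion of minimum foot clearance (MFC) values below h, with a bootstrap confidence interval. The gamma-distributed toy MFC data and the helper trip_probability are assumptions for illustration; the study's statistical treatment is more involved.

```python
import numpy as np

# Hedged sketch: probability of tripping over an unseen obstacle of height h
# taken as the proportion of minimum foot clearance (MFC) values below h, with
# a bootstrap confidence interval. Toy gamma-distributed MFC data in mm; the
# study's statistical model is more involved than this.

def trip_probability(mfc, h, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    mfc = np.asarray(mfc, float)
    p_hat = np.mean(mfc < h)
    boot = np.array([np.mean(rng.choice(mfc, size=mfc.size, replace=True) < h)
                     for _ in range(n_boot)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return p_hat, (lo, hi)

mfc = np.random.default_rng(1).gamma(shape=4.0, scale=5.0, size=300)  # toy MFC, mm
print(trip_probability(mfc, h=10.0))
```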

  6. A fully automatic three-step liver segmentation method on LDA-based probability maps for multiple contrast MR images.

    PubMed

    Gloger, Oliver; Kühn, Jens; Stanski, Adam; Völzke, Henry; Puls, Ralf

    2010-07-01

    Automatic 3D liver segmentation in magnetic resonance (MR) data sets has proven to be a very challenging task in the domain of medical image analysis. There exist numerous approaches for automatic 3D liver segmentation on computed tomography data sets that have influenced the segmentation of MR images. In contrast to previous approaches to liver segmentation in MR data sets, we use all available MR channel information of different weightings and formulate liver tissue and position probabilities in a probabilistic framework. We apply multiclass linear discriminant analysis as a fast and efficient dimensionality reduction technique and generate probability maps then used for segmentation. We develop a fully automatic three-step 3D segmentation approach based upon a modified region growing approach and a further threshold technique. Finally, we incorporate characteristic prior knowledge to improve the segmentation results. This novel 3D segmentation approach is modularized and can be applied for normal and fat accumulated liver tissue properties.

  7. The lead-lag relationship between stock index and stock index futures: A thermal optimal path method

    NASA Astrophysics Data System (ADS)

    Gong, Chen-Chen; Ji, Shen-Dan; Su, Li-Ling; Li, Sai-Ping; Ren, Fei

    2016-02-01

    The study of the lead-lag relationship between stock index and stock index futures is of great importance for its wide application in hedging and portfolio investments. Previous works mainly use conventional methods such as the Granger causality test, the GARCH model and the error correction model, and focus on the causality relation between the index and futures in a certain period. By using a non-parametric approach, the thermal optimal path (TOP) method, we study the lead-lag relationship between the China Securities Index 300 (CSI 300), the Hang Seng Index (HSI), the Standard and Poor 500 (S&P 500) Index and their associated futures to reveal the variation of their relationship over time. Our findings show evidence of pronounced futures leadership for well established index futures, namely the HSI and S&P 500 index futures, whereas the index of a developing market such as the CSI 300 shows pronounced leadership. We offer an explanation for the changes of the lead-lag function based on an indicator that quantifies the differences between spot and futures prices. Our results provide new perspectives for the understanding of the dynamical evolution of the lead-lag relationship between stock index and stock index futures, which is valuable for the study of market efficiency and its applications.

  8. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy

    NASA Astrophysics Data System (ADS)

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems.

  9. An analysis of quantum effects on the thermodynamic properties of cryogenic hydrogen using the path integral method.

    PubMed

    Nagashima, H; Tsuda, S; Tsuboi, N; Koshi, M; Hayashi, K A; Tokumasu, T

    2014-04-01

    In this paper, we describe an analysis of the thermodynamic properties of cryogenic hydrogen using classical molecular dynamics (MD) and path integral MD (PIMD) methods to understand the effects of the quantum nature of hydrogen molecules. We performed constant NVE MD simulations across a wide density-temperature region to establish an equation of state (EOS). Moreover, the quantum effect on the molecular mechanism underlying the pressure-volume-temperature relationship was addressed. The EOS was derived on the basis of classical mechanics, using only the MD simulation results. Simulation results from the two MD methods were compared with each other and with experimental data. As a result, it was confirmed that although the EOS based on classical MD cannot reproduce the experimental saturation properties of hydrogen in the high-density region, the EOS based on PIMD reproduces those thermodynamic properties of hydrogen well. Moreover, it was clarified that taking quantum effects into account makes the repulsion force larger and the potential well shallower. Because of this mechanism, the intermolecular interaction of hydrogen molecules diminishes and the virial pressure increases.

  10. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy.

    PubMed

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems. PMID:27300946

  11. Comparing laser-based open- and closed-path gas analyzers to measure methane fluxes using the eddy covariance method

    USGS Publications Warehouse

    Detto, M.; Verfaillie, J.; Anderson, F.; Xu, L.; Baldocchi, D.

    2011-01-01

    Closed- and open-path methane gas analyzers are used in eddy covariance systems to compare three potential methane emitting ecosystems in the Sacramento-San Joaquin Delta (CA, USA): a rice field, a peatland pasture and a restored wetland. The study points out similarities and differences of the systems in field experiments and data processing. The closed-path system, despite a less intrusive placement with the sonic anemometer, required more care and power. In contrast, the open-path system appears more versatile for a remote and unattended experimental site. Overall, the two systems have comparable minimum detectable limits, but synchronization between wind speed and methane data, air density corrections and spectral losses have different impacts on the computed flux covariances. For the closed-path analyzer, air density effects are less important, but the synchronization and spectral losses may represent a problem when fluxes are small or when an undersized pump is used. For the open-path analyzer air density corrections are greater, due to spectroscopy effects and the classic Webb-Pearman-Leuning correction. Comparison between the 30-min fluxes reveals good agreement in terms of magnitudes between open-path and closed-path flux systems. However, the scatter is large, as a consequence of the intensive data processing that both systems require. © 2011.
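
    For orientation, the covariance at the heart of the eddy covariance method can be written in a few lines. The sketch below computes a block-averaged kinematic flux F = cov(w', c'); it deliberately omits the coordinate rotation, lag synchronization, spectral corrections and Webb-Pearman-Leuning density corrections that the record above discusses, and the data are synthetic toy values.

```python
import numpy as np

# Hedged sketch: a bare-bones eddy covariance flux for one averaging block,
# F = cov(w', c'), from synchronized vertical wind speed w and methane signal c.
# Real processing also needs coordinate rotation, lag synchronization, spectral
# corrections and (for open-path analyzers) Webb-Pearman-Leuning corrections.

def ec_flux(w, c):
    w = np.asarray(w, float)
    c = np.asarray(c, float)
    wp = w - w.mean()          # fluctuations about the block mean
    cp = c - c.mean()
    return np.mean(wp * cp)    # covariance = kinematic flux

# Toy 30-minute block sampled at 10 Hz with a weakly correlated tracer signal
rng = np.random.default_rng(1)
n = 30 * 60 * 10
w = rng.normal(0.0, 0.3, n)
c = 1.8 + 0.05 * w + rng.normal(0.0, 0.02, n)
print(ec_flux(w, c))
```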

  12. A Random Walk on a Circular Path

    ERIC Educational Resources Information Center

    Ching, W.-K.; Lee, M. S.

    2005-01-01

    This short note introduces an interesting random walk on a circular path with cards of numbers. By using high school probability theory, it is proved that under some assumptions on the number of cards, the probability that a walker will return to a fixed position will tend to one as the length of the circular path tends to infinity.

  13. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
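
    The percolation-style connectivity check underlying such material-network models can be sketched compactly. The snippet below generates a random 2D void/solid lattice, flood-fills the void sites reachable from one boundary, and compares the resulting effective porosity with the superficial porosity; the lattice size, 4-neighbour connectivity and strictly 2D setting are simplifying assumptions rather than the authors' model.

```python
import numpy as np
from collections import deque

# Hedged sketch: a random 2D void/solid lattice in which only void sites
# connected to the inlet boundary are kept as "effective" transport paths.
# Lattice size, 2D setting and 4-neighbour connectivity are illustrative
# simplifications of the stochastic material-network idea.

def effective_porosity(n=100, p_void=0.5, seed=0):
    rng = np.random.default_rng(seed)
    void = rng.random((n, n)) < p_void
    reached = np.zeros_like(void, dtype=bool)
    queue = deque((0, j) for j in range(n) if void[0, j])   # seeds on the inlet row
    reached[0, void[0]] = True
    while queue:
        i, j = queue.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and void[a, b] and not reached[a, b]:
                reached[a, b] = True
                queue.append((a, b))
    spanning = bool(reached[-1].any())       # does any path reach the outlet row?
    return void.mean(), reached.mean(), spanning

superficial, effective, spans = effective_porosity()
print(superficial, effective, spans)
```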

  14. Multi-Domain VLAN Path Signaling Method Having Tag Swapping Function for GMPLS Controlled Wide Area Layer-2 Network

    NASA Astrophysics Data System (ADS)

    Kikuta, Kou; Nishida, Masahiro; Ishii, Daisuke; Okamoto, Satoru; Yamanaka, Naoaki

    A multi-domain GMPLS layer-2 switch capable network with VLAN tag swapping is demonstrated for the first time. In this demonstration, we verify three new features: establishing paths with designated VLAN IDs, swapping VLAN IDs on a prototype switch, and managing VLAN IDs per domain. Using these three features, carrier-class Ethernet backbone networks that support path route designation in multi-domain networks can be established.

  15. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2015-01-28

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(-6)) to O(P(-12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  16. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2015-01-28

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(-6)) to O(P(-12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities. PMID:25637967

  17. Real-space finite-difference approach for multi-body systems: path-integral renormalization group method and direct energy minimization method.

    PubMed

    Sasaki, Akira; Kojo, Masashi; Hirose, Kikuji; Goto, Hidekazu

    2011-11-01

    The path-integral renormalization group and direct energy minimization method of practical first-principles electronic structure calculations for multi-body systems within the framework of the real-space finite-difference scheme are introduced. These two methods can handle higher dimensional systems with consideration of the correlation effect. Furthermore, they can be easily extended to the multicomponent quantum systems which contain more than two kinds of quantum particles. The key to the present methods is employing linear combinations of nonorthogonal Slater determinants (SDs) as multi-body wavefunctions. As one of the noticeable results, the same accuracy as the variational Monte Carlo method is achieved with a few SDs. This enables us to study the entire ground state consisting of electrons and nuclei without the need to use the Born-Oppenheimer approximation. Recent activities on methodological developments aiming towards practical calculations such as the implementation of auxiliary field for Coulombic interaction, the treatment of the kinetic operator in imaginary-time evolutions, the time-saving double-grid technique for bare-Coulomb atomic potentials and the optimization scheme for minimizing the total-energy functional are also introduced. As test examples, the total energy of the hydrogen molecule, the atomic configuration of the methylene and the electronic structures of two-dimensional quantum dots are calculated, and the accuracy, availability and possibility of the present methods are demonstrated.

  18. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion on Monte Carlo methods and outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
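
    The technique itself fits in a few lines. The article uses Visual Basic; the sketch below shows the same idea in Python: a definite integral approximated as (b - a) times the mean of f at uniformly sampled points.

```python
import math
import random

# Hedged sketch of the Monte Carlo integration idea (the article uses Visual
# Basic; Python here for brevity):
#   integral_a^b f(x) dx ~ (b - a) * mean of f at uniform random samples.

def mc_integral(f, a, b, n=100_000, seed=0):
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

print(mc_integral(math.sin, 0.0, math.pi))   # exact value is 2
```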

  19. Emptiness Formation Probability

    NASA Astrophysics Data System (ADS)

    Crawford, Nicholas; Ng, Stephen; Starr, Shannon

    2016-08-01

    We present rigorous upper and lower bounds on the emptiness formation probability for the ground state of a spin-1/2 Heisenberg XXZ quantum spin system. For a d-dimensional system we find a rate of decay of the order exp(-c L^{d+1}), where L is the sidelength of the box in which we ask for the emptiness formation event to occur. In the d=1 case this confirms previous predictions made in the integrable systems community, though our bounds do not achieve the precision predicted by Bethe ansatz calculations. On the other hand, our bounds in the case d ≥ 2 are new. The main tools we use are reflection positivity and a rigorous path integral expansion, which is a variation on those previously introduced by Toth, Aizenman-Nachtergaele and Ueltschi.

  20. A Didactic Proposed for Teaching the Concepts of Electrons and Light in Secondary School Using Feynman's Path Sum Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Arlego, Marcelo; Otero, Maria Rita

    2012-01-01

    This work comprises an investigation of basic quantum mechanics (QM) teaching in high school. The organization of the concepts does not follow a historical line. The Path Integrals method of Feynman has been adopted as a Reference Conceptual Structure that is an alternative to the canonical formalism. We have designed a didactic sequence…

  1. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
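
    For readers unfamiliar with the underlying at-site fit, a bare-bones sketch is given below: a log-Pearson Type III distribution fitted to annual peaks by the method of moments and evaluated at a chosen annual exceedance probability. The expected moments algorithm, regional skew weighting and low-outlier tests used in the study are not reproduced, and the toy peak record is invented.

```python
import numpy as np
from scipy import stats

# Hedged sketch: an at-site log-Pearson Type III fit by the method of moments,
# evaluated at a chosen annual exceedance probability (AEP). The study itself
# uses the expected moments algorithm, regional skew weighting and low-outlier
# tests, none of which are reproduced here; the peak record below is invented.

def aep_discharge(annual_peaks, aep):
    logq = np.log10(np.asarray(annual_peaks, float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    # Quantile with non-exceedance probability 1 - aep in log10 space
    log_quantile = stats.pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)
    return 10.0 ** log_quantile

peaks = [3200, 5400, 2100, 8800, 4100, 6000, 2900, 7500, 3600, 5100,
         4700, 9900, 2600, 6800, 3900, 5600, 4400, 7100, 3100, 5800]
print(aep_discharge(peaks, aep=0.01))   # "100-year" (1-percent AEP) discharge
```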

  2. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  3. Prediction of slant path rain attenuation statistics at various locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1977-01-01

    The paper describes a method for predicting slant path attenuation statistics at arbitrary locations for variable frequencies and path elevation angles. The method involves the use of median reflectivity factor-height profiles measured with radar as well as the use of long-term point rain rate data and assumed or measured drop size distributions. The attenuation coefficient due to cloud liquid water in the presence of rain is also considered. Absolute probability fade distributions are compared for eight cases: Maryland (15 GHz), Texas (30 GHz), Slough, England (19 and 37 GHz), Fayetteville, North Carolina (13 and 18 GHz), and Cambridge, Massachusetts (13 and 18 GHz).

  4. Method for measurement of transition probabilities by laser-induced breakdown spectroscopy based on CSigma graphs-Application to Ca II spectral lines

    NASA Astrophysics Data System (ADS)

    Aguilera, J. A.; Aragón, C.; Manrique, J.

    2015-07-01

    We propose a method for determination of transition probabilities by laser-induced breakdown spectroscopy that avoids the error due to self-absorption. The method relies on CSigma graphs, a generalization of curves of growth which allows including several lines of various elements in the same ionization state. CSigma graphs are constructed including reference lines of an emitting species with well-known transition probabilities, together with the lines of interest, both in the same ionization state. The samples are fused glass disks prepared from small concentrations of compounds. When the method is applied, the concentration of the element of interest in the sample must be controlled to avoid the failure of the homogeneous plasma model. To test the method, the transition probabilities of 9 Ca II lines arising from the 4d, 5s, 5d and 6s configurations are measured using Fe II reference lines. The data for 5 of the studied lines, mainly from the 5d and 6s configurations, had not been measured previously.

  5. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low quality data are used and improving the convergence rate greatly using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method without a positivity constraint initially, and then damped to a physically reasonable range. This first-step MAP inversion quickly brings the inversion close to the 'true' solution and jumps over local maximum regions in high-dimensional parameter space. The second step inversion approaches the 'true' solution further with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third step MAP inversion with fault geometry parameters fixed. We first used a designed model with 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those results from other groups. Our results show the effectiveness of

  6. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.

  7. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational–rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    SciTech Connect

    Mielke, Steven L.; Truhlar, Donald G., E-mail: truhlar@umn.edu

    2015-01-28

    We present an improved version of our “path-by-path” enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(-6)) to O(P(-12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational–rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan–Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300–3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  8. The albedo effect on neutron transmission probability.

    PubMed

    Khanouchi, A; Sabir, A; Boulkheir, M; Ichaoui, R; Ghassoun, J; Jehouani, A

    1997-01-01

    The aim of this study is to evaluate the albedo effect on the neutron transmission probability through slab shields. For this reason we have considered an infinite homogeneous slab having a fixed thickness equal to 20 lambda (lambda is the mean free path of the neutron in the slab). This slab is characterized by the factor Ps (scattering probability) and contains a vacuum channel which is formed by two horizontal parts and an inclined one (David, M. C. (1962) Ducts and Voids in shields. In Reactor Handbook, Vol. III, Part B, p. 166). The thickness of the vacuum channel is taken equal to 2 lambda. An infinite plane source of neutrons is placed on the first face of the slab (left face) and detectors, having windows equal to 2 lambda, are placed on the second face of the slab (right face). Neutron histories are sampled by the Monte Carlo method (Booth, T. E. and Hendricks, J. S. (1994) Nuclear Technology 5) using exponential biasing in order to increase the Monte Carlo calculation efficiency (Levitt, L. B. (1968) Nuclear Science and Engineering 31, 500-504; Jehouani, A., Ghassoun, J. and Abouker, A. (1994) In Proceedings of the 6th International Symposium on Radiation Physics, Rabat, Morocco) and we have applied the statistical weight method which supposes that the neutron is born at the source with a unit statistical weight and after each collision this weight is corrected. For different values of the scattering probability and for different slopes of the inclined part of the channel we have calculated the neutron transmission probability for different positions of the detectors versus the albedo at the vacuum channel-medium interface. Some analytical representations are also presented for these transmission probabilities. PMID:9463883
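
    To make the sampling idea concrete, the sketch below runs a plain analog Monte Carlo estimate of transmission through a homogeneous slab with isotropic scattering, tallying the fraction of histories that cross the far face. It omits the exponential biasing, statistical weights and vacuum channel of the study, and uses a thinner slab than 20 lambda so that the analog estimate is non-negligible.

```python
import math
import random

# Hedged sketch: analog Monte Carlo transmission through a homogeneous slab of
# thickness L (in mean free paths) with scattering probability Ps and isotropic
# re-emission. The study above uses exponential biasing, statistical weights and
# a vacuum channel; a thinner slab is used here so the analog tally is visible.

def transmission_probability(L=5.0, Ps=0.8, n_histories=50_000, seed=0):
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                                # born at the left face, moving right
        while True:
            x += -math.log(1.0 - rng.random()) * mu     # free flight projected on x
            if x >= L:
                transmitted += 1
                break
            if x < 0.0:                                 # leaked back through the entrance face
                break
            if rng.random() > Ps:                       # absorbed at the collision site
                break
            mu = 2.0 * rng.random() - 1.0               # isotropic scatter: new direction cosine
    return transmitted / n_histories

print(transmission_probability())
```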

  9. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
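
    The scaling step can be illustrated schematically. In the sketch below a zero-absorption time-resolved reflectance curve is rescaled by a Beer-Lambert factor whose absorption coefficient is a weighted average over layers, with the per-layer time fractions supplied as constants; the paper instead derives those fractions from a closed-form average photon path, and the stand-in reflectance curve here is purely illustrative.

```python
import numpy as np

# Hedged sketch: rescaling a zero-absorption time-resolved reflectance curve by
# a Beer-Lambert factor whose absorption coefficient is a time-fraction-weighted
# average over layers. The per-layer fractions are constants here, whereas the
# paper derives them from a closed-form average photon path.

def scale_reflectance(R0, t, mu_a, fractions, v=214.0):
    """R0: zero-absorption reflectance at times t (ns); mu_a: per-layer absorption
    coefficients (1/mm); fractions: per-layer time fractions; v: speed of light
    in tissue (mm/ns)."""
    mu_eff = float(np.dot(fractions, mu_a))        # weighted absorption coefficient
    return np.asarray(R0, float) * np.exp(-mu_eff * v * np.asarray(t, float))

t = np.linspace(0.05, 2.0, 50)                     # ns
R0 = t ** -1.5 * np.exp(-1.0 / t)                  # stand-in zero-absorption curve
print(scale_reflectance(R0, t, mu_a=[0.01, 0.02], fractions=[0.6, 0.4])[:5])
```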

  10. Spectroscopic method for Earth-satellite-Earth laser long-path absorption measurements using Retroreflector In Space (RIS)

    NASA Technical Reports Server (NTRS)

    Sugimoto, Nobuo; Minato, Atsushi; Sasano, Yasuhiro

    1992-01-01

    The Retroreflector in Space (RIS) is a single element cube-corner retroreflector with a diameter of 0.5 m designed for earth-satellite-earth laser long-path absorption experiments. The RIS is to be loaded on the Advanced Earth Observing System (ADEOS) satellite which is scheduled for launch in Feb. 1996. The orbit for ADEOS is a sun synchronous subrecurrent polar-orbit with an inclination of 98.6 deg. It has a period of 101 minutes and an altitude of approximately 800 km. The local time at descending node is 10:15-10:45, and the recurrent period is 41 days. The velocity relative to the ground is approximately 7 km/s. In the RIS experiment, a laser beam transmitted from a ground station is reflected by RIS and received at the ground station. The absorption of the intervening atmosphere is measured in the round-trip optical path.

  11. Assessment of the probability of failure for EC nondestructive testing based on intrusive spectral stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Oudni, Zehor; Féliachi, Mouloud; Mohellebi, Hassane

    2014-06-01

    This work studies the reliability of eddy current nondestructive testing (EC-NDT) when the defect consists of a change in a physical property of the material. To this end, an intrusive spectral stochastic finite element method (SSFEM) is developed for the 2D electromagnetic harmonic equation. The electrical conductivity is treated as a random variable and expanded in a series of Hermite polynomials. The model is validated against measurements on an NDT device and is applied to the assessment of the probability of failure in steam generator tubing of nuclear power plants. The model is used to calculate the sensor impedance and to assess the probability of failure. Random defect geometry is also considered and results are given.
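
    A minimal sketch of the Hermite (polynomial chaos) expansion used for the random conductivity may help. Assuming a lognormal conductivity kappa = exp(mu + sigma*xi), with xi a standard normal variable, its coefficients in probabilists' Hermite polynomials have the closed form used below; the full SSFEM, which propagates this expansion through the finite element system, is beyond this fragment, and all numerical values are illustrative.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials He_n

def lognormal_pce_coeffs(mu, sigma, order):
    """Hermite-series coefficients of a lognormal random conductivity
    kappa = exp(mu + sigma*xi), xi ~ N(0,1):  c_n = exp(mu + sigma^2/2) * sigma^n / n!"""
    return np.array([np.exp(mu + 0.5 * sigma**2) * sigma**n / factorial(n)
                     for n in range(order + 1)])

# Compare the truncated expansion with the exact conductivity on a few samples.
rng = np.random.default_rng(0)
xi = rng.standard_normal(5)
coeffs = lognormal_pce_coeffs(mu=np.log(1.0e6), sigma=0.1, order=4)   # illustrative values, S/m
exact = np.exp(np.log(1.0e6) + 0.1 * xi)
approx = hermeval(xi, coeffs)
print(np.max(np.abs(approx - exact) / exact))   # small truncation error
```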

  12. Oscillator strengths and transition probabilities from the Breit–Pauli R-matrix method: Ne IV

    SciTech Connect

    Nahar, Sultana N.

    2014-09-15

    The atomic parameters (oscillator strengths, line strengths, radiative decay rates A, and lifetimes) for fine structure transitions of electric dipole (E1) type for the astrophysically abundant ion Ne IV are presented. The results include 868 fine structure levels with n ≤ 10, l ≤ 9, and 1/2 ≤ J ≤ 19/2 of even and odd parities, and the corresponding 83,767 E1 transitions. The calculations were carried out using the relativistic Breit–Pauli R-matrix method in the close coupling approximation. The transitions have been identified spectroscopically using an algorithm based on quantum defect analysis and other criteria. The calculated energies agree with the 103 observed and identified energies to within 3% or better for most of the levels. Some larger differences are also noted. The A-values show good to fair agreement with the very limited number of available transitions in the table compiled by NIST, but show very good agreement with the latest published multi-configuration Hartree–Fock calculations. The present transitions should be useful for diagnostics as well as for precise and complete spectral modeling in the soft X-ray to infra-red regions of astrophysical and laboratory plasmas. -- Highlights: •The first application of BPRM method for accurate E1 transitions in Ne IV is reported. •Amount of atomic data (n going up to 10) is complete for most practical applications. •The calculated energies are in very good agreement with most observed levels. •Very good agreement of A-values and lifetimes with other relativistic calculations. •The results should provide precise nebular abundances, chemical evolution etc.

  13. Methods for designing treatments to reduce interior noise of predominant sources and paths in a single engine light aircraft

    NASA Technical Reports Server (NTRS)

    Hayden, Richard E.; Remington, Paul J.; Theobald, Mark A.; Wilby, John F.

    1985-01-01

    The sources and paths by which noise enters the cabin of a small single engine aircraft were determined through a combination of flight and laboratory tests. The primary sources of noise were found to be airborne noise from the propeller and engine casing, airborne noise from the engine exhaust, structure-borne noise from the engine/propeller combination and noise associated with air flow over the fuselage. For the propeller, the primary airborne paths were through the firewall, windshield and roof. For the engine, the most important airborne path was through the firewall. Exhaust noise was found to enter the cabin primarily through the panels in the vicinity of the exhaust outlet, although exhaust noise entering the cabin through the firewall is a distinct possibility. A number of noise control techniques were tried, including firewall stiffening to reduce engine and propeller airborne noise, two-stage isolators and engine mounting spider stiffening to reduce structure-borne noise, and wheel well covers to reduce air flow noise.

  14. Evaluation of a new most-probable-number (MPN) dilution plate method for the enumeration of Escherichia coli in water samples.

    PubMed

    Kodaka, Hidemasa; Saito, Mikako; Matsuoka, Hideaki

    2009-09-01

    The purpose of this study was to evaluate the most-probable-number dilution plate (MPN plate) method developed for the enumeration of Escherichia coli in water samples. Sterilized water was inoculated with E. coli ATCC 11775 to give between 2 and 1600 MPN/100 ml. The MPN was determined for both the MPN plate and 5-tube methods from the MPN table. The average of the natural logarithm (ln) MPN with standard deviations in 95 samples was 4.26 +/- 1.48 by the 5-tube method and 4.18 +/- 1.45 by the MPN plate method. The correlation coefficient was 0.96. These results were not significantly different according to the paired t-test (p > 0.05).

  15. Using Logistic Regression and Random Forests multivariate statistical methods for landslide spatial probability assessment in North-East Sicily, Italy

    NASA Astrophysics Data System (ADS)

    Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele

    2015-04-01

    The first phase of the work aimed to identify the spatial relationships between landslide locations and 13 related factors by using the Frequency Ratio bivariate statistical method. The analysis was then carried out with a multivariate statistical approach, using the Logistic Regression and Random Forests techniques, the latter giving the best results in terms of AUC. The models were built and evaluated with different sample sizes, also taking into account the temporal variation of input variables such as areas burned by wildfire. The most significant outcomes of this work are the relevant influence of the sample size on the model results and the strong importance of some environmental factors (e.g., land use and wildfires) for identifying the depletion zones of extremely rapid shallow landslides.
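
    As a rough illustration of the modelling step (not the authors' data or code), the snippet below fits a logistic regression and a random forest to a synthetic stand-in for a landslide inventory and compares them by AUC, mirroring the model comparison described above. Feature meanings and all numbers are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a landslide inventory: rows are mapping units,
# columns are predisposing factors (slope, land use, burned area, ...).
X, y = make_classification(n_samples=2000, n_features=13, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]        # landslide spatial probability
    print(name, "AUC =", round(roc_auc_score(y_te, p), 3))
```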

  16. A comparison of fractal methods and probability plots in identifying and mapping soil metal contamination near an active mining area, Iran.

    PubMed

    Geranian, Hamid; Mokhtari, Ahmad Reza; Cohen, David R

    2013-10-01

    Mining activities may contribute significant amounts of metals to surrounding soils. Assessing the potential effects and extent of metal contamination requires the differentiation between geogenic and additional anthropogenic sources. This study compares the use of conventional probability plots with two forms of fractal analysis (number-size and concentration-area) to separate geochemical populations of ore-related elements in agricultural area soils adjacent to Pb-Zn mining operations in the Irankuh Mountains, central Iran. The two general approaches deliver similar spatial groupings of univariate geochemical populations, but the fractal methods provide more distinct separation between populations and require less data manipulation and modeling than the probability plots. The concentration-area fractal approach was more effective than the number-size fractal and probability plotting methods at separating sub-populations within the samples affected by contamination from the mining operations. There is a general lack of association between major elements and ore-related metals in the soils. The background populations display higher relative variation in the major elements than the ore-related metals, whereas near the mining operations there is far greater relative variation in the ore-related metals. The extent of the transport of contaminants away from the mine site is partly a function of the greater dispersion of Zn compared with Pb and As; however, the patterns indicate that dispersion of contaminants from the mine site is via dust and not surface water or groundwater. A combination of geochemical and graphical assessment, with different methods of threshold determination, is shown to be effective in separating geogenic and anthropogenic geochemical patterns.
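
    The concentration-area fractal idea can be sketched in a few lines: for each concentration threshold, compute the area of cells exceeding it, and look for breaks in slope of the log-log concentration-area curve to separate background from anomalous populations. The example below uses synthetic lognormal data and a simple two-segment fit; it illustrates the general technique, not the paper's exact procedure.

```python
import numpy as np

def concentration_area(values, cell_area=1.0, n_thresholds=50):
    """Empirical concentration-area curve: for each threshold c, the total area of
    cells whose concentration exceeds c."""
    v = np.asarray(values, dtype=float)
    thresholds = np.logspace(np.log10(v.min()), np.log10(v.max()), n_thresholds)
    areas = np.array([(v >= c).sum() * cell_area for c in thresholds])
    return thresholds, areas

def best_breakpoint(log_c, log_a):
    """Index where a two-segment straight-line fit to the log-log curve has the
    smallest total squared residual (a crude way to locate the population break)."""
    best, best_err = None, np.inf
    for k in range(3, len(log_c) - 3):
        err = 0.0
        for sl in (slice(0, k), slice(k, None)):
            coef = np.polyfit(log_c[sl], log_a[sl], 1)
            err += np.sum((np.polyval(coef, log_c[sl]) - log_a[sl]) ** 2)
        if err < best_err:
            best, best_err = k, err
    return best

rng = np.random.default_rng(1)
# Mixture standing in for background and mine-affected Pb concentrations (ppm).
conc = np.concatenate([rng.lognormal(3.0, 0.4, 900), rng.lognormal(5.0, 0.6, 100)])
c, a = concentration_area(conc)
a = np.maximum(a, 1)                      # avoid log(0) at the highest thresholds
k = best_breakpoint(np.log10(c), np.log10(a))
print("estimated threshold ~", round(c[k], 1), "ppm")
```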

  17. Calculation for path-domain independent J integral with elasto-viscoplastic consistent tangent operator concept-based boundary element methods

    NASA Astrophysics Data System (ADS)

    Yong, Liu; Qichao, Hong; Lihua, Liang

    1999-05-01

    This paper presents an elasto-viscoplastic consistent tangent operator (CTO) based boundary element formulation and its application to the calculation of path-domain independent J integrals (an extension of the classical J integral) in nonlinear crack analysis. When viscoplastic deformation occurs, the effective stresses around the crack tip in the nonlinear region are allowed to exceed the loading surface, and pure plasticity theory is not suitable for this situation. The concept of consistency employed in the solution of the incremental viscoplastic problem plays a crucial role in preserving the quadratic asymptotic rate of convergence of iterative schemes based on Newton's method. Therefore, this paper investigates the viscoplastic crack problem and presents an implicit viscoplastic algorithm using the CTO concept in a boundary element framework for path-domain independent J integrals. Applications are presented with two numerical examples for viscoplastic crack problems and J integrals.

  18. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176
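
    As a toy illustration of the path-weighting machinery behind the RSP framework, the snippet below damps the reference random-walk transition probabilities by Boltzmann factors of the edge costs, makes the target absorbing, and uses the fundamental matrix Z = (I - W)^-1 to obtain expected numbers of visits to each node on paths from a source to a target. The published betweenness measures are built by aggregating such quantities over node pairs with additional bookkeeping, so treat this only as a sketch of the core computation, not the authors' estimator.

```python
import numpy as np

def rsp_expected_visits(A, costs, s, t, beta=1.0):
    """Expected number of visits to each node on Boltzmann-weighted walks from s
    to t: reference probabilities come from the unbiased random walk on A, each
    edge is damped by exp(-beta * cost), and the target t is made absorbing."""
    A = np.asarray(A, dtype=float)
    P_ref = A / A.sum(axis=1, keepdims=True)        # reference random-walk transitions
    W = P_ref * np.exp(-beta * np.asarray(costs))   # Boltzmann-damped walk
    W[t, :] = 0.0                                   # hitting paths end at t
    Z = np.linalg.inv(np.eye(len(A)) - W)           # fundamental matrix: weighted path sums
    return Z[s, :] * Z[:, t] / Z[s, t]              # expected visits n_j = z_sj * z_jt / z_st

# Small 4-node example: two equivalent routes from node 0 to node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
costs = np.where(A > 0, 1.0, 0.0)                   # unit cost per edge
print(rsp_expected_visits(A, costs, s=0, t=3, beta=1.0))
```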

  19. Finding the biased-shortest path with minimal congestion in networks via linear-prediction of queue length

    NASA Astrophysics Data System (ADS)

    Shen, Yi; Ren, Gang; Liu, Yang

    2016-06-01

    In this paper, we propose a biased-shortest-path method with minimal congestion. In the method, we use linear prediction to estimate the queue length of nodes and propose a dynamic accepting probability function with which nodes decide whether to accept or reject incoming packets. The dynamic accepting probability function is based on the idea of homogeneous network flow and is developed to enable nodes to coordinate their queue lengths to avoid congestion. A path strategy incorporating the linear prediction of the queue length and the dynamic accepting probability function of nodes is designed to allow packets to be automatically delivered on uncongested paths with short traveling time. Our method has the advantage of low computation cost because the optimal paths are dynamically self-organized by the nodes during packet delivery using local traffic information. We compare our method with existing methods such as the efficient path method (EPS) and the optimal path method (OPS) on BA scale-free networks and a real example. The numerical computations show that our method performs best under low network load and has the minimum run time due to its low computational cost and local routing scheme.
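
    The abstract does not give the exact functional forms, so the following sketch is only a guess at the mechanism: each node linearly extrapolates its two most recent queue lengths to predict the next one, and accepts an incoming packet with a probability that decreases as the predicted queue approaches capacity. Class and parameter names are invented for illustration.

```python
import random

class Node:
    """Toy router that linearly predicts its queue length one step ahead and
    accepts incoming packets with a probability that shrinks as the predicted
    queue approaches its capacity."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.queue = []
        self.history = [0, 0]                       # two most recent queue lengths

    def predicted_length(self):
        q_prev, q_now = self.history
        return max(0.0, q_now + (q_now - q_prev))   # linear extrapolation

    def accept_probability(self):
        return max(0.0, 1.0 - self.predicted_length() / self.capacity)

    def offer(self, packet):
        accepted = random.random() < self.accept_probability()
        if accepted:
            self.queue.append(packet)
        self.history = [self.history[1], len(self.queue)]
        return accepted

node = Node(capacity=10)
print(sum(node.offer(i) for i in range(30)), "of 30 packets accepted")
```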

  20. Analysis of sample preparation procedures for enumerating fecal coliforms in coarse southwestern U.S. bottom sediments by the most-probable-number method.

    PubMed Central

    Doyle, J D; Tunnicliff, B; Kramer, R E; Brickler, S K

    1984-01-01

    The determination of bacterial densities in aquatic sediments generally requires that a dilution-mixing treatment be used before enumeration of organisms by the most-probable-number fermentation tube method can be done. Differential sediment and organism settling rates may, however, influence the distribution of the microbial population after the dilution-mixing process, resulting in biased bacterial density estimates. For standardization of sample preparation procedures, the influence of settling by suspended sediments on the fecal coliform distribution in a mixing vessel was examined. This was accomplished with both inoculated (Escherichia coli) and raw, uninoculated freshwater sediments from Saguaro Lake, Ariz. Both test sediments were coarse (greater than 90% gravel and sand). Coarse sediments are typical of southwestern U.S. lakes. The distribution of fecal coliforms, as determined by the most-probable-number method, was not significantly influenced by sediment settling and remained homogeneous over a 16-min postmix period. The technique developed for coarse sediments may be useful for standardizing sample preparation techniques for other sediment types. PMID:6391380

  2. Measurement of rainfall path attenuation near nadir: A comparison of radar and radiometer methods at 13.8 GHz

    NASA Astrophysics Data System (ADS)

    Durden, S. L.; Haddad, Z. S.; Im, E.; Kitiyakara, A.; Li, F. K.; Tanner, A. B.; Wilson, W. J.

    1995-07-01

    Rain profile retrieval from spaceborne radar is difficult because of the presence of attenuation at the higher frequencies planned for these systems. One way to reduce the ambiguity in the retrieved rainfall profile is to use the path-integrated attenuation as a constraint. Two techniques for measuring the path-integrated attenuation have been proposed: the radar surface reference technique and microwave radiometry. We compare these two techniques using data acquired by the Airborne Rain Mapping Radar (ARMAR) 13.8-GHz airborne radar and radiometer during the Tropical Ocean-Global Atmosphere Coupled Ocean Atmosphere Response Experiment (TOGA COARE) in the western Pacific Ocean in early 1993. The two techniques have a mean difference close to zero for both nadir and 10° incidence. The RMS difference is 1.4 dB and is reduced to 1 dB or less if points where the radiometer was likely saturated are excluded. Part of the RMS difference can be attributed to variability in the ocean surface cross section due to wind effects and possibly rain effects. The results presented here are relevant for the Tropical Rainfall Measuring Mission, which will include a 13.8-GHz precipitation radar.

  3. Asymptotics of Selberg-like integrals by lattice path counting

    SciTech Connect

    Novaes, Marcel

    2011-04-15

    We obtain explicit expressions for positive integer moments of the probability density of eigenvalues of the Jacobi and Laguerre random matrix ensembles, in the asymptotic regime of large dimension. These densities are closely related to the Selberg and Selberg-like multidimensional integrals. Our method of solution is combinatorial: it consists in the enumeration of certain classes of lattice paths associated to the solution of recurrence relations.

  4. Nonadiabatic transition path sampling

    NASA Astrophysics Data System (ADS)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  5. Detection of genetically modified microorganisms in soil using the most-probable-number method with multiplex PCR and DNA dot blot.

    PubMed

    Yeom, Jinki; Lee, Yunho; Noh, Jaemin; Jung, Jaejoon; Park, Jungsoon; Seo, Hyoju; Kim, Jisun; Han, Jiwon; Jeon, Che Ok; Kim, Taesung; Park, Woojun

    2011-10-01

    The principal objective of this study was to detect genetically modified microorganisms (GMMs) that might be accidentally released into the environment from laboratories. Two methods [plate counting and most-probable-number (MPN)] coupled with either multiplex PCR or DNA dot blots were compared using genetically modified Escherichia coli, Pseudomonas putida, and Acinetobacter oleivorans harboring an antibiotic-resistance gene with additional gfp and lacZ genes as markers. Alignments of sequences collected from databases using the Perl scripting language (Perl API) and from denaturing gradient gel electrophoresis analysis revealed that the gfp, lacZ and antibiotic-resistance genes (kanamycin, tetracycline, and ampicillin) in GMMs differed from the counterpart genes in many sequenced genomes and in soil DNA. Thus, specific multiplex PCR primer sets for detection of plasmid-based gfp and lacZ antibiotic-resistance genes could be generated. In the plate counting method, many antibiotic-resistant bacteria from a soil microcosm grew as colonies on antibiotic-containing agar plates. The multiplex PCR verification of randomly selected antibiotic-resistant colonies with specific primers proved ineffective. The MPN-multiplex PCR method and antibiotic-resistant phenotype could be successfully used to detect GMMs, although this method is quite laborious. The MPN-DNA dot blot method screened more cells at a time in a microtiter plate containing the corresponding antibiotics, and was shown to be a more efficient method for the detection of GMMs in soil using specific probes in terms of labor and accuracy. PMID:21810467

  6. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    2003-01-01

    The T-year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T-year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank transformed data and therefore that knowledge of the at-site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100-year flood will occur on the average every 4.5 years.
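
    A minimal Monte Carlo sketch of the regional flood probability (RFP) calculation: simulate correlated standardized annual maxima, count years in which at least one site exceeds its T-year level, and average. The correlation matrix and sample sizes below are illustrative, and the Gaussian rank-based simulation is a simplification of the procedure described above.

```python
import numpy as np
from scipy.stats import norm

def regional_flood_probability(corr, T=100, n_years=200_000, seed=0):
    """Monte Carlo estimate of the probability that at least one site experiences
    its T-year flood in a given year, under multivariate-normal (rank-correlation)
    dependence between annual maxima."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n_years)
    threshold = norm.ppf(1.0 - 1.0 / T)       # T-year level on the standardized scale
    return np.mean((z > threshold).any(axis=1))

# Three sites with moderate spatial correlation (values are illustrative only).
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])
print(regional_flood_probability(corr, T=100))   # independent sites would give ~1-(1-0.01)^3
```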

  7. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  8. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.

  9. Double path integral method for obtaining the mobility of the one-dimensional charge transport in molecular chain.

    PubMed

    Yoo-Kong, Sikarin; Liewrian, Watchara

    2015-12-01

    We report on a theoretical investigation concerning the polaronic effect on the transport properties of a charge carrier in a one-dimensional molecular chain. Our technique is based on the Feynman's path integral approach. Analytical expressions for the frequency-dependent mobility and effective mass of the carrier are obtained as functions of electron-phonon coupling. The result exhibits the crossover from a nearly free particle to a heavily trapped particle. We find that the mobility depends on temperature and decreases exponentially with increasing temperature at low temperature. It exhibits large polaronic-like behaviour in the case of weak electron-phonon coupling. These results agree with the phase transition (A.S. Mishchenko et al., Phys. Rev. Lett. 114, 146401 (2015)) of transport phenomena related to polaron motion in the molecular chain. PMID:26701710

  10. Integrating the probability integral method for subsidence prediction and differential synthetic aperture radar interferometry for monitoring mining subsidence in Fengfeng, China

    NASA Astrophysics Data System (ADS)

    Diao, Xinpeng; Wu, Kan; Zhou, Dawei; Li, Liang

    2016-01-01

    Differential synthetic aperture radar interferometry (D-InSAR) is characterized mainly by high spatial resolution and high accuracy over a wide coverage range. Because of its unique advantages, the technology is widely used for monitoring ground surface deformations. However, in coal mining areas, the ground surface can suffer large-scale collapses in short periods of time, leading to inaccuracies in D-InSAR results and limiting its use for monitoring mining subsidence. We propose a data-processing method that overcomes these disadvantages by combining D-InSAR with the probability integral method used for predicting mining subsidence. Five RadarSat-2 images over Fengfeng coal mine, China, were used to demonstrate the proposed method and assess its effectiveness. Using this method, surface deformation could be monitored over an area of thousands of square kilometers, and more than 50 regions affected by subsidence were identified. For Jiulong mine, nonlinear subsidence cumulative results were obtained for a time period from January 2011 to April 2011, and the maximum subsidence value reached up to 299 mm. Finally, the efficiency and applicability of the proposed method were verified by comparing with data from leveling surveying.
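
    For readers unfamiliar with the probability integral method, the sketch below shows its basic one-dimensional form: the extracted panel is convolved with a Gaussian influence function of radius r, giving a closed-form subsidence profile via the error function. Corrections for extraction ratio, seam inclination, and horizontal movement used in practice are omitted, and all parameter values are made up.

```python
import numpy as np
from scipy.special import erf

def subsidence_profile(x, x1, x2, w_max, r):
    """Surface subsidence predicted by the 1-D probability integral method:
    the extracted panel [x1, x2] is convolved with the Gaussian influence
    function f(u) = (1/r) * exp(-pi * u**2 / r**2); closed form via erf."""
    g = lambda u: 0.5 * (erf(np.sqrt(np.pi) * u / r) + 1.0)   # integral of f from -inf to u
    return w_max * (g(x - x1) - g(x - x2))

x = np.linspace(-200, 400, 601)                   # metres, illustrative values
w = subsidence_profile(x, x1=0.0, x2=200.0, w_max=0.3, r=80.0)
print(round(w.max(), 3), "m maximum predicted subsidence")
```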

  11. SWAPDT: A method for Short-time Withering Assessment of Probability for Drought Tolerance in Camellia sinensis validated by targeted metabolomics.

    PubMed

    Nyarukowa, Christopher; Koech, Robert; Loots, Theodor; Apostolides, Zeno

    2016-07-01

    Climate change is causing droughts that affect crop production on a global scale. Classical breeding and selection strategies for drought-tolerant cultivars will help prevent crop losses. Plant breeders, for all crops, need a simple and reliable method to identify drought-tolerant cultivars, but such a method is missing. Plant metabolism is often disrupted by abiotic stress conditions. To survive drought, plants reconfigure their metabolic pathways. Studies have documented the importance of metabolic regulation, i.e. accumulation of osmolytes such as polyols and sugars (mannitol, sorbitol) and amino acids (proline), during drought. This study identified and quantified metabolites in drought-tolerant and drought-susceptible Camellia sinensis cultivars under wet and drought stress conditions. GC-MS and LC-MS were employed for the metabolomics analyses. %RWC results show that the two drought-tolerant and two drought-susceptible cultivars differed significantly (p ≤ 0.05) from one another; the drought-susceptible cultivars exhibited rapid water loss compared to the drought-tolerant ones. There was a significant variation (p < 0.05) in metabolite content (amino acids, sugars) between drought-tolerant and drought-susceptible tea cultivars after short-time withering conditions. These metabolite changes were similar to those seen in other plant species under drought conditions, thus validating this method. The Short-time Withering Assessment of Probability for Drought Tolerance (SWAPDT) method presented here provides an easy method to identify drought-tolerant tea cultivars that will mitigate the crop losses caused by drought due to climate change.

  12. Comparison of nine brands of membrane filter and the most-probable-number methods for total coliform enumeration in sewage-contaminated drinking water.

    PubMed

    Tobin, R S; Lomax, P; Kushner, D J

    1980-08-01

    Nine different brands of membrane filter were compared in the membrane filtration (MF) method, and those with the highest yields were compared against the most-probable-number (MPN) multiple-tube method for total coliform enumeration in simulated sewage-contaminated tap water. The water was chlorinated for 30 min to subject the organisms to stresses similar to those encountered during treatment and distribution of drinking water. Significant differences were observed among membranes in four of the six experiments, with two- to four-times-higher recoveries between the membranes at each extreme of recovery. When results from the membranes with the highest total coliform recovery rate were compared with the MPN results, the MF results were found significantly higher in one experiment and equivalent to the MPN results in the other five experiments. A comparison was made of the species enumerated by these methods; in general the two methods enumerated a similar spectrum of organisms, with some indication that the MF method was subject to greater interference by Aeromonas.

  13. Improved methods for Feynman path integral calculations of vibrational-rotational free energies and application to isotopic fractionation of hydrated chloride ions.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2009-04-23

    We present two enhancements to our methods for calculating vibrational-rotational free energies by Feynman path integrals, namely, a sequential sectioning scheme for efficiently generating random free-particle paths and a stratified sampling scheme that uses the energy of the path centroids. These improved methods are used with three interaction potentials to calculate equilibrium constants for the fractionation behavior of Cl(-) hydration in the presence of a gas-phase mixture of H(2)O, D(2)O, and HDO. Ion cyclotron resonance experiments indicate that the equilibrium constant, K(eq), for the reaction Cl(H(2)O)(-) + D(2)O ⇌ Cl(D(2)O)(-) + H(2)O is 0.76, whereas the three theoretical predictions are 0.946, 0.979, and 1.20. Similarly, the experimental K(eq) for the Cl(H(2)O)(-) + HDO ⇌ Cl(HDO)(-) + H(2)O reaction is 0.64 as compared to theoretical values of 0.972, 0.998, and 1.10. Although Cl(H(2)O)(-) has a large degree of anharmonicity, K(eq) values calculated with the harmonic oscillator rigid rotator (HORR) approximation agree with the accurate treatment to within better than 2% in all cases. Results of a variety of electronic structure calculations, including coupled cluster and multireference configuration interaction calculations, with either the HORR approximation or with anharmonicity estimated via second-order vibrational perturbation theory, all agree well with the equilibrium constants obtained from the analytical surfaces.
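
    Free-particle paths pinned at both ends are commonly generated by the Lévy bisection construction sketched below, in which each level fills in segment midpoints with Gaussian displacements; the sequential sectioning scheme of the paper is a related but more efficient bookkeeping, so this is only a generic illustration with unit diffusion and arbitrary endpoints.

```python
import numpy as np

def brownian_bridge_bisection(a, b, n_levels, rng, total_time=1.0):
    """Levy bisection construction of a Brownian-bridge (free-particle) path pinned
    at a and b: each level fills in the midpoints of all current segments with a
    Gaussian whose mean is the segment average and whose variance is dt/4."""
    times = np.array([0.0, total_time])
    path = np.array([a, b])
    for _ in range(n_levels):
        mid_t = 0.5 * (times[:-1] + times[1:])
        dt = np.diff(times)
        mid_x = 0.5 * (path[:-1] + path[1:]) + rng.standard_normal(len(dt)) * np.sqrt(dt / 4.0)
        # interleave the new midpoints with the existing points
        times = np.insert(times, np.arange(1, len(times)), mid_t)
        path = np.insert(path, np.arange(1, len(path)), mid_x)
    return times, path

rng = np.random.default_rng(2)
t, x = brownian_bridge_bisection(a=0.0, b=0.0, n_levels=6, rng=rng)  # 2**6 segments, 65 points
print(len(x), "discretization points; centroid =", round(x.mean(), 3))
```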

  14. Development of a curved ray tracing method for modeling of phase paths from GPS radio occultation: A two-dimensional study

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon; Kuo, Ying-Hwa; Lee, Dong-Kyou

    2010-12-01

    A two-dimensional curved ray tracer (CRT) is developed to study the propagation path of radio signals across a heterogeneous planetary atmosphere. The method, designed to achieve improvements in both computational efficiency and accuracy over conventional straight-line methods, takes rays' first-order bending into account to better describe curved raypaths in the stratified atmosphere. CRT is then used to simulate the phase path from GPS radio occultation (RO). The merit of the ray tracing approach in GPS RO is explicit consideration of horizontal variation in the atmosphere, which may lead to a sizable error but is disregarded in traditional retrieval schemes. In addition, direct modeling of the phase path takes advantage of simple error characteristics in the measurement. With provision of ionospheric and neutral atmospheric refractive indices, in this effort, rays are traced along the full range of GPS-low Earth orbiting (LEO) radio links just as the measurements are made in real life. Here, ray shooting is employed to realize the observed radio links with controlled accuracy. CRT largely reproduces the very measured characteristics of GPS signals. When compared, the measured and simulated phases show remarkable agreement. The cross validation between CRT and GPS RO has confirmed not only the strength of CRT but also the high accuracy of GPS RO measurements. The primary motivation for this study is enabling effective quality control for GPS RO data, overcoming a complicated error structure in the high-level data. CRT has also shown a great deal of potential for improved utilization of GPS RO data for geophysical research.

  15. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are <1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  16. On Probability Domains III

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2015-12-01

    Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.

  17. Enumeration of Enterobacteriaceae in various foods with a new automated most-probable-number method compared with petrifilm and international organization for standardization procedures.

    PubMed

    Paulsen, P; Borgetti, C; Schopf, E; Smulders, F J M

    2008-02-01

    An automated most-probable-number (MPN) system (TEMPO, bioMérieux, Marcy l'Etoile, France) for enumeration of Enterobacteriaceae (EB) was compared with methods involving violet red bile glucose agar (VRBG) (International Organization for Standardization [ISO] method 21528-2) (ISO-VRBG) and Petrifilm (PF-EB). The MPN partitioning (three different volumes with 16 replicates of each) is done automatically in a disposable card. Bacterial growth is indicated by acid production from sugars, lowering the pH of the medium, and quenching the fluorescence of 4-methylumbelliferone. After incubation, the number of nonfluorescent wells is read in a separate device, and the MPN is calculated automatically. A total of 411 naturally contaminated foods were tested, and 190 were in the detection range for all methods. For these results, the mean (+/- standard deviation) counts were 2.540 +/- 1.026, 2.547 +/- 0.995, and 2.456 +/- 1.014 log CFU/g for the ISO-VRBG, PF-EB, and automated MPN methods, respectively. Mean differences were -0.084 +/- 0.460 log units for the automated MPN results compared with the ISO-VRBG and 0.007 +/- 0.450 for the PF-EB results compared with the ISO-VRBG results. The automated MPN method tends to yield lower numbers and the PF-EB method tends to yield higher numbers than does the ISO-VRBG method (difference not significant; Kruskal-Wallis test, P = 0.102). Thus, the average difference was highest between the automated MPN method and the PF-EB method (-0.091 +/- 0.512 log units). Differences between the automated MPN and ISO-VRBG results of > 1 log unit were detected in 3.4% of all samples. For 3.9% of the samples, one comparison yielded differences of < 1 log CFU/g and the other yielded > 1 but < 2 log CFU/g, which means that the differences are possibly > 1 log CFU/g. For the ISO-VRBG method, confirmation of isolates was necessary to avoid a bias due to the presence of oxidase-positive glucose-fermenting colonies. The automated MPN system yielded results

  18. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    NASA Astrophysics Data System (ADS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at the CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished using the continuous-energy Monte Carlo code TRIPOLI-4® by using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To address this problem, it was decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has given good results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high

  19. Review of pipe-break probability assessment methods and data for applicability to the advanced neutron source project for Oak Ridge National Laboratory

    SciTech Connect

    Fullwood, R.R.

    1989-04-01

    The Advanced Neutron Source (ANS) (Difilippo, 1986; Gamble, 1986; West, 1986; Selby, 1987) will be the world's best facility for low energy neutron research. This performance requires the highest flux density of all non-pulsed reactors, with concomitant low thermal inertia and fast response to upset conditions. One of the primary concerns is that a flow cessation on the order of a second may result in fuel damage. Such a flow stoppage could be the result of a break in the primary piping. This report is a review of methods for assessing pipe break probabilities based on historical operating experience in power reactors, scaling methods, fracture mechanics, and fracture growth models. The goal of this work is to develop parametric guidance for the ANS design to make the event highly unlikely. A further goal is to review and select methods that may be used in an interactive IBM-PC model, providing fast and reasonably accurate tools to aid the ANS designers in achieving the safety requirements. 80 refs., 7 figs.

  20. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability that a vehicle is in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of an event. To calculate this, different methods are used in the literature and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.

  1. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal-spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.
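
    Neither abstract states the recommended formula explicitly, so the following back-of-the-envelope sketch only illustrates the occupancy-type approximation that combines both dimensions: the probability that some vehicle occupies the affected stretch at the moment of failure is roughly the traffic rate times the combined length of the falling mass and a vehicle, divided by the vehicle speed. All numbers are invented.

```python
# Illustrative numbers (not from the papers): a rockfall affecting 20 m of road,
# vehicles 5 m long travelling at 80 km/h, traffic of 300 vehicles per hour.
# Treating arrivals as uniform in time, the expected number of vehicles on the
# affected stretch at a random instant is q * (L_mass + L_vehicle) / v, which for
# small values approximates the probability that a vehicle is present.
q = 300 / 3600.0            # vehicles per second
v = 80 / 3.6                # vehicle speed, m/s
l_mass, l_vehicle = 20.0, 5.0
p_occupied = q * (l_mass + l_vehicle) / v
print(round(p_occupied, 3))  # small enough that overlaps between vehicles are neglected
```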

  2. Combining the least cost path method with population genetic data and species distribution models to identify landscape connectivity during the late Quaternary in Himalayan hemlock.

    PubMed

    Yu, Haibin; Zhang, Yili; Liu, Linshan; Qi, Wei; Li, Shicheng; Hu, Zhongjun

    2015-12-01

    Himalayan hemlock (Tsuga dumosa) experienced a recolonization event during the Quaternary period; however, the specific dispersal routes remain unknown. Recently, the least cost path (LCP) calculation, coupled with population genetic data and species distribution models, has been applied to reveal landscape connectivity. In this study, we used the categorical LCP method, combining species distributions from three periods (the last interglacial, the last glacial maximum, and the current period) and localities with shared chloroplast, mitochondrial, and nuclear haplotypes, to identify the possible dispersal routes of T. dumosa in the late Quaternary. Then, both a coalescent estimate of migration rates among regional groups and an assessment of the pattern of genetic divergence were conducted. From these analyses, we found that the species generally migrated along the southern slope of the Himalaya across time periods and genomic markers, and that the degree of dispersal was higher for the present period and for the mtDNA haplotypes. Furthermore, the direction of range shifts and the strong level of gene flow also imply the existence of a Himalayan dispersal path, and the limited area of genetic divergence suggests that there are no obvious barriers along the dispersal pathway. Taken together, we infer that a dispersal route along the Himalaya Mountains could exist, which is an important supplement to the evolutionary history of T. dumosa. Finally, we believe that this integrative genetic and geospatial method will bring new insights into the evolutionary processes and conservation priorities of species on the Tibetan Plateau.
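
    The least cost path computation itself is a standard shortest-path problem on a resistance raster. The sketch below runs Dijkstra's algorithm on a tiny 4-connected grid as a generic stand-in for the LCP step; the raster values, grid size, and function names are illustrative and unrelated to the actual species distribution surfaces.

```python
import heapq
import numpy as np

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path on a 2-D resistance raster (4-connected moves);
    the cost of a move is the resistance of the cell being entered."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:          # walk back through predecessors
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

resistance = np.array([[1, 1, 5, 5],
                       [5, 1, 5, 1],
                       [5, 1, 1, 1],
                       [5, 5, 5, 1]], dtype=float)
path, total = least_cost_path(resistance, start=(0, 0), goal=(3, 3))
print(path, total)
```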

  3. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied for the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models.

  4. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  5. Reaching the Hard-to-Reach: A Probability Sampling Method for Assessing Prevalence of Driving under the Influence after Drinking in Alcohol Outlets

    PubMed Central

    De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite

    2012-01-01

    Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlets (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking in less than one hour after the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620
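
    The first stage of the design, selection with probability proportional to size (PPS), can be illustrated independently of the full three-stage scheme. The snippet below draws census enumeration areas with probability proportional to their alcohol-outlet counts (with replacement, a simplification) and forms the corresponding design weights; the outlet counts and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical census enumeration areas and their alcohol-outlet counts.
outlets = np.array([2, 7, 1, 12, 4, 9, 3, 6])
p = outlets / outlets.sum()                    # selection probability proportional to size
n_draws = 4
sample = rng.choice(len(outlets), size=n_draws, replace=True, p=p)
weights = 1.0 / (n_draws * p[sample])          # Hansen-Hurwitz design weights per draw
print(sample, np.round(weights, 2))
```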

  6. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstation to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.

  7. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decision maker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its

  9. A Microwave Radiometric Method to Obtain the Average Path Profile of Atmospheric Temperature and Humidity Structure Parameters and Its Application to Optical Propagation System Assessment

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.; Vyhnalek, Brian E.

    2015-01-01

    The values of the key atmospheric propagation parameters Ct2, Cq2, and Ctq are highly dependent upon the vertical height within the atmosphere, thus making it necessary to specify profiles of these values along the atmospheric propagation path. The remote sensing method suggested and described in this work makes use of a rapidly integrating microwave profiling radiometer to capture profiles of temperature and humidity through the atmosphere. The integration times of currently available profiling radiometers are such that they are approaching the temporal intervals over which one can possibly make meaningful assessments of these key atmospheric parameters. Since these parameters are fundamental to all propagation conditions, they can be used to obtain Cn2 profiles for any frequency, including those for an optical propagation path. In this case the important performance parameters of the prevailing isoplanatic angle and Greenwood frequency can be obtained. The integration times are such that Kolmogorov turbulence theory and the Taylor frozen-flow hypothesis must be transcended. Appropriate modifications to these classical approaches are derived from first principles and expressions for the structure functions are obtained. The theory is then applied to an experimental scenario and shows very good results.

  10. Sensitive quantitative detection of Ralstonia solanacearum in soil by the most probable number-polymerase chain reaction (MPN-PCR) method.

    PubMed

    Inoue, Yasuhiro; Nakaho, Kazuhiro

    2014-05-01

    We developed a sensitive quantitative assay for detecting Ralstonia solanacearum in soil by most probable number (MPN) analysis based on bio-PCR results. For development of the detection method, we optimized an elution buffer containing 5 g/L skim milk for extracting bacteria from soil and reducing contamination of polymerase inhibitors in soil extracts. Because R. solanacearum can grow in water without any added nutrients, we used a cultivation buffer in the culture step of the bio-PCR that contained only the buffer and antibiotics to suppress the growth of other soil microorganisms. To quantify the bacterial population in soil, the elution buffer was added to 10 g soil on a dry weight basis so that the combined weight of buffer, soil, and soil-water was 50 g; 5 mL of soil extract was assumed to originate from 1 g of soil. The soil extract was divided into triplicate aliquots each of 5 mL and 500, 50, and 5 μL. Each aliquot was diluted with the cultivation buffer and incubated at 35 °C for about 24 h. After incubation, 5 μL of culture was directly used for nested PCR. The number of aliquots showing positive results was collectively checked against the MPN table. The method could quantify bacterial populations in soil down to 3 cfu/10 g dried soil and was successfully applied to several types of soil. We applied the method to the quantitative detection of R. solanacearum in horticultural soils; it quantitatively detected small populations (9.3 cfu/g) that the semiselective media were not able to detect.
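
    A minimal sketch of the MPN principle behind such a dilution-series readout is shown below; it estimates the concentration by maximum likelihood under a Poisson assumption, with hypothetical soil-equivalent volumes and positive-tube counts, and is not the authors' MPN-table lookup:

      import math

      def mpn_estimate(volumes_g, positives, tubes_per_dilution=3):
          """Most-probable-number estimate (organisms per gram of soil).

          volumes_g : soil-equivalent volume tested per tube at each dilution
          positives : number of PCR-positive tubes at each dilution
          Assumes Poisson-distributed organisms; finds the concentration that
          maximizes the likelihood of the observed positive/negative pattern.
          """
          def log_likelihood(conc):
              ll = 0.0
              for v, p in zip(volumes_g, positives):
                  n = tubes_per_dilution
                  p_pos = 1.0 - math.exp(-conc * v)   # P(a single tube is positive)
                  if p > 0:
                      ll += p * math.log(p_pos)
                  ll += (n - p) * (-conc * v)          # log P(negative) for the rest
              return ll

          # simple grid search over plausible concentrations (0.001 to 1000 cfu/g)
          grid = [10 ** (e / 50.0) for e in range(-150, 151)]
          return max(grid, key=log_likelihood)

      # hypothetical example: 1, 0.1, 0.01 and 0.001 g of soil per tube,
      # with 3, 2, 1, 0 positive tubes out of 3 at each level
      print(round(mpn_estimate([1.0, 0.1, 0.01, 0.001], [3, 2, 1, 0]), 2))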

  11. Probability on a Budget.

    ERIC Educational Resources Information Center

    Ewbank, William A.; Ginther, John L.

    2002-01-01

    Describes how to use common dice numbered 1-6 for simple mathematical situations including probability. Presents a lesson using regular dice and specially marked dice to explore some of the concepts of probability. (KHR)

  12. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach

    SciTech Connect

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes.

  13. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach.

    PubMed

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes. PMID:26590526

  14. Incorporating a completely renormalized coupled cluster approach into a composite method for thermodynamic properties and reaction paths

    SciTech Connect

    Nedd, Sean; DeYonker, Nathan; Wilson, Angela; Piecuch, Piotr; Gordon, Mark

    2012-04-12

    The correlation consistent composite approach (ccCA), using the S4 complete basis set two-point extrapolation scheme (ccCA-S4), has been modified to incorporate the left-eigenstate completely renormalized coupled cluster method, including singles, doubles, and non-iterative triples (CR-CC(2,3)) as the highest level component. The new ccCA-CC(2,3) method predicts thermodynamic properties with an accuracy that is similar to that of the original ccCA-S4 method. At the same time, the inclusion of the single-reference CR-CC(2,3) approach provides a ccCA scheme that can correctly treat reaction pathways that contain certain classes of multi-reference species such as diradicals, which would normally need to be treated by more computationally demanding multi-reference methods. The new ccCA-CC(2,3) method produces a mean absolute deviation of 1.7 kcal/mol for predicted heats of formation at 298 K, based on calibration with the G2/97 set of 148 molecules, which is comparable to that of 1.0 kcal/mol obtained using the ccCA-S4 method, while significantly improving the performance of the ccCA-S4 approach in calculations involving more demanding radical and diradical species. Both the ccCA-CC(2,3) and ccCA-S4 composite methods are used to characterize the conrotatory and disrotatory isomerization pathways of bicyclo[1.1.0]butane to trans-1,3-butadiene, for which conventional coupled cluster methods, such as the CCSD(T) approach used in the ccCA-S4 model and, in consequence, the ccCA-S4 method itself might fail by incorrectly placing the disrotatory pathway below the conrotatory one. The ccCA-CC(2,3) scheme provides correct pathway ordering while providing an accurate description of the activation and reaction energies characterizing the lowest-energy conrotatory pathway. The ccCA-CC(2,3) method is thus a viable method for the analyses of reaction mechanisms that have significant multi-reference character, and presents a generally less computationally intensive alternative to

  15. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of more complex circuits; the proposed method also significantly decreases the probability of encountering a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this allows us to propose an algorithm for nonlinear circuit analysis. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed.

  16. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of more complex circuits; the proposed method also significantly decreases the probability of encountering a divergence problem when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this allows us to propose an algorithm for nonlinear circuit analysis. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed. PMID:27386338

  17. Is quantum probability rational?

    PubMed

    Houston, Alasdair I; Wiesner, Karoline

    2013-06-01

    We concentrate on two aspects of the article by Pothos & Busemeyer (P&B): the relationship between classical and quantum probability and quantum probability as a basis for rational decisions. We argue that the mathematical relationship between classical and quantum probability is not quite what the authors claim. Furthermore, it might be premature to regard quantum probability as the best practical rational scheme for decision making.

  18. Predicted probabilities' relationship to inclusion probabilities.

    PubMed

    Fang, Di; Chong, Jenny; Wilson, Jeffrey R

    2015-05-01

    It has been shown that under a general multiplicative intercept model for risk, case-control (retrospective) data can be analyzed by maximum likelihood as if they had arisen prospectively, up to an unknown multiplicative constant, which depends on the relative sampling fraction. (1) With suitable auxiliary information, retrospective data can also be used to estimate response probabilities. (2) In other words, predictive probabilities obtained without adjustments from retrospective data will likely be different from those obtained from prospective data. We highlighted this using binary data from Medicare to determine the probability of readmission into the hospital within 30 days of discharge, which is particularly timely because Medicare has begun penalizing hospitals for certain readmissions. (3).

  19. Opportunity's Path

    NASA Technical Reports Server (NTRS)

    2004-01-01

    fifteen sols. This will include El Capitan and probably one to two other areas.

    Blue Dot Dates:
    Sol 7 / Jan 31 = Egress & first soil data collected by instruments on the arm
    Sol 9 / Feb 2 = Second Soil Target
    Sol 12 / Feb 5 = First Rock Target
    Sol 16 / Feb 9 = Alpha Waypoint
    Sol 17 / Feb 10 = Bravo Waypoint
    Sol 19 or 20 / Feb 12 or 13 = Charlie Waypoint

  20. Racing To Understand Probability.

    ERIC Educational Resources Information Center

    Van Zoest, Laura R.; Walker, Rebecca K.

    1997-01-01

    Describes a series of lessons designed to supplement textbook instruction of probability by addressing the ideas of "equally likely," "not equally likely," and "fairness," as well as to introduce the difference between theoretical and experimental probability. Presents four lessons using The Wind Racer games to study probability. (ASK)

  1. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  2. Searching with probabilities

    SciTech Connect

    Palay, A.J.

    1985-01-01

    This book examines how probability distributions can be used as a knowledge representation technique. It presents a mechanism that can be used to guide a selective search algorithm to solve a variety of tactical chess problems. Topics covered include probabilities and searching the B algorithm and chess probabilities - in practice, examples, results, and future work.

  3. Forecasting the path of a laterally propagating dike

    NASA Astrophysics Data System (ADS)

    Heimisson, Elías Rafn; Hooper, Andrew; Sigmundsson, Freysteinn

    2015-12-01

    An important aspect of eruption forecasting is predicting the path of propagating dikes. We show how lateral dike propagation can be forecast using the minimum potential energy principle. We compare theory to observed propagation paths of dikes originating at the Bárðarbunga volcano, Iceland, in 2014 and 1996, by developing a probability distribution for the most likely propagation path. The observed propagation paths agree well with the model prediction. We find that topography is very important for the model, and our preferred forecasting model considers its influence on the potential energy change of the crust and magma. We tested the influence of topography by running the model assuming no topography and found that the path of the 2014 dike could not be hindcasted. The results suggest that lateral dike propagation is governed not only by deviatoric stresses but also by pressure gradients and gravitational potential energy. Furthermore, the model predicts the formation of curved dikes around cone-shaped structures without the assumption of a local deviatoric stress field. We suggest that a likely eruption site for a laterally propagating dike is in topographic lows. The method presented here is simple and computationally feasible. Our results indicate that this kind of a model can be applied to mitigate volcanic hazards in regions where the tectonic setting promotes formation of laterally propagating vertical intrusive sheets.

  4. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.

  5. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models

  6. In All Probability, Probability is not All

    ERIC Educational Resources Information Center

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.

  7. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
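
    As a minimal illustration of the chance-constraint idea itself (assuming a single linear constraint a·x ≤ b and Gaussian position uncertainty; the numbers and the 5% risk bound are invented, and this is not the paper's convex-optimization machinery):

      import math

      def violation_probability(a, b, mean, cov):
          """P(a.x > b) for x ~ N(mean, cov): the chance of violating a.x <= b."""
          mu = sum(ai * mi for ai, mi in zip(a, mean))
          var = sum(a[i] * cov[i][j] * a[j]
                    for i in range(len(a)) for j in range(len(a)))
          z = (b - mu) / math.sqrt(var)
          return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # 1 - Phi(z)

      # hypothetical waypoint: stay in the half-plane x <= 5 with 2-D position uncertainty
      p_viol = violation_probability(a=[1.0, 0.0], b=5.0, mean=[3.0, 0.0],
                                     cov=[[1.0, 0.0], [0.0, 1.0]])
      print(p_viol <= 0.05)   # does this waypoint satisfy a 5% chance constraint?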

  8. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to be highly biased (+90%-110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
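
    A minimal sketch of the power-law conversion discussed above, assuming Z is in mm^6/m^3 and dBZ = 10 log10(Z); the reflectivity values are made up, and the optional cutoff illustrates one simple reading of applying the 36.5-dBZ threshold:

      def rain_rate_mm_per_h(dbz, a=300.0, b=1.4, threshold_dbz=None):
          """Invert Z = a * R**b, with Z recovered from dBZ = 10*log10(Z)."""
          if threshold_dbz is not None and dbz < threshold_dbz:
              return 0.0
          z_linear = 10.0 ** (dbz / 10.0)
          return (z_linear / a) ** (1.0 / b)

      for dbz in (20.0, 36.5, 45.0, 55.0):
          print(dbz, round(rain_rate_mm_per_h(dbz), 2),
                round(rain_rate_mm_per_h(dbz, threshold_dbz=36.5), 2))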

  9. Spaces of paths and the path topology

    NASA Astrophysics Data System (ADS)

    Low, Robert J.

    2016-09-01

    The natural topology on the space of causal paths of a space-time depends on the topology chosen on the space-time itself. Here we consider the effect of using the path topology on space-time instead of the manifold topology, and its consequences for how properties of space-time are reflected in the structure of the space of causal paths.

  10. Identifying the main paths of information diffusion in online social networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hengmin; Yin, Xicheng; Ma, Jing; Hu, Wei

    2016-06-01

    Recently, a growing body of research on relationship strength has shown that some links in online social networks are socially active. Furthermore, it is likely that there exist main paths which play the most significant role in the process of information diffusion. Although much previous work has focused on the pathway of a specific event, few studies have extracted these main paths. To identify the main paths of online social networks, we propose a method that measures link weights based on historical interaction records. The influence of a node, based on forwarding volume, is quantified, and the top-ranked nodes are selected as the influential users. Path importance is evaluated by calculating the probability that a message would spread via that path. We applied our method to a real-world network and found interesting insights. Each influential user can reach another via a short main path, and the distribution of main paths shows a significant community effect.
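
    One way to make the path-importance criterion concrete (a sketch with an invented link-probability graph, not the authors' weighting from interaction records): treat each link weight as the probability that a message crosses it, so a path's probability is the product of its link probabilities, and the most probable path is a shortest path under -log weights:

      import heapq, math

      def most_probable_path(prob, source, target):
          """Dijkstra on -log(link probability); returns (path, path probability)."""
          dist = {source: 0.0}
          prev = {}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == target:
                  break
              if d > dist.get(u, math.inf):
                  continue
              for v, p in prob.get(u, {}).items():
                  nd = d - math.log(p)
                  if nd < dist.get(v, math.inf):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [target], target
          while node != source:
              node = prev[node]
              path.append(node)
          return path[::-1], math.exp(-dist[target])

      # hypothetical forwarding probabilities between users A..D
      graph = {"A": {"B": 0.9, "C": 0.4}, "B": {"D": 0.5}, "C": {"D": 0.95}}
      print(most_probable_path(graph, "A", "D"))   # ['A', 'B', 'D'] with probability 0.45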

  11. An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D

    NASA Technical Reports Server (NTRS)

    Thompson, Kyle B.; Gnoffo, Peter A.

    2016-01-01

    An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.

  12. Introducing a Method for Calculating the Allocation of Attention in a Cognitive “Two-Armed Bandit” Procedure: Probability Matching Gives Way to Maximizing

    PubMed Central

    Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor

    2016-01-01

    We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the "two-armed bandit choice procedure." The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result which has attracted most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small, adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting, a simple linear equation describes the relationship between attention and correct responses, and that the equation's solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the
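
    The gap between probability matching and maximizing can be seen in a few lines of simulation (the reward probabilities here are arbitrary stand-ins, not the stimulus contingencies used in the study):

      import random

      def simulate(p_best=0.7, trials=10000, seed=1):
          rng = random.Random(seed)
          p = (p_best, 1.0 - p_best)          # reward probability of each arm
          match_correct = max_correct = 0
          for _ in range(trials):
              # probability matching: choose arm i with probability p[i]
              arm = 0 if rng.random() < p[0] else 1
              match_correct += rng.random() < p[arm]
              # maximizing: always choose the better arm
              max_correct += rng.random() < p[0]
          return match_correct / trials, max_correct / trials

      matching, maximizing = simulate()
      print(f"matching ~{matching:.2f}, maximizing ~{maximizing:.2f}")
      # expected: matching ~ p^2 + (1-p)^2 = 0.58, maximizing ~ 0.70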

  13. Shortest path and Schramm-Loewner Evolution

    PubMed Central

    Posé, N.; Schrenk, K. J.; Araújo, N. A. M.; Herrmann, H. J.

    2014-01-01

    We numerically show that the statistical properties of the shortest path on critical percolation clusters are consistent with the ones predicted for Schramm-Loewner evolution (SLE) curves for κ = 1.04 ± 0.02. The shortest path results from a global optimization process. To identify it, one needs to explore an entire area. Establishing a relation with SLE makes it possible to generate curves statistically equivalent to the shortest path from a Brownian motion. We numerically analyze the winding angle, the left passage probability, and the driving function of the shortest path and compare them to the distributions predicted for SLE curves with the same fractal dimension. The consistency with SLE opens the possibility of using a solid theoretical framework to describe the shortest path and it raises relevant questions regarding conformal invariance and domain Markov properties, which we also discuss. PMID:24975019

  14. Converging towards the optimal path to extinction

    PubMed Central

    Schwartz, Ira B.; Forgoston, Eric; Bianco, Simone; Shaw, Leah B.

    2011-01-01

    Extinction appears ubiquitously in many fields, including chemical reactions, population biology, evolution and epidemiology. Even though extinction as a random process is a rare event, its occurrence is observed in large finite populations. Extinction occurs when fluctuations owing to random transitions act as an effective force that drives one or more components or species to vanish. Although there are many random paths to an extinct state, there is an optimal path that maximizes the probability of extinction. In this paper, we show that the optimal path is associated with the dynamical systems idea of having maximum sensitive dependence on initial conditions. Using the equivalence between the sensitive dependence and the path to extinction, we show that the dynamical systems picture of extinction evolves naturally towards the optimal path in several stochastic models of epidemics. PMID:21571943

  15. Knowledge typology for imprecise probabilities.

    SciTech Connect

    Wilson, G. D.; Zucker, L. J.

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  16. A Comparison of Two Path Planners for Planetary Rovers

    NASA Astrophysics Data System (ADS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm to find a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and the resulting paths are given, which show the effectiveness of the planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.

  17. General polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method applied to small atoms, ions, and molecules at finite temperatures

    NASA Astrophysics Data System (ADS)

    Tiihonen, Juha; Kylänpää, Ilkka; Rantala, Tapio T.

    2016-09-01

    The nonlinear optical properties of matter have a broad relevance and many methods have been invented to compute them from first principles. However, the effects of electronic correlation, finite temperature, and breakdown of the Born-Oppenheimer approximation have turned out to be challenging and tedious to model. Here we propose a straightforward approach and derive general field-free polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method. The estimators are applied to small atoms, ions, and molecules with one or two electrons. With the adiabatic, i.e., Born-Oppenheimer, approximation we obtain accurate tensorial ground state polarizabilities, while the nonadiabatic simulation adds in considerable rovibrational effects and thermal coupling. In both cases, the 0 K, or ground-state, limit is in excellent agreement with the literature. Furthermore, we report here the internal dipole moment of the PsH molecule, the temperature dependence of the polarizabilities of H-, and the average dipole polarizabilities and the ground-state hyperpolarizabilities of HeH+ and H3+.

  18. The absolute path command

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  19. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
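
    A rough Python analogue of this behavior, using only the standard library (a sketch, not the ap implementation itself):

      import os, sys

      def show_absolute_path(name):
          """Print each symlink hop in the leaf chain, then the final absolute path."""
          path = os.path.abspath(name)
          seen = set()
          while os.path.islink(path):
              if path in seen:                      # guard against symlink loops
                  break
              seen.add(path)
              target = os.readlink(path)
              print(f"{path} -> {target}")
              path = os.path.abspath(os.path.join(os.path.dirname(path), target))
          print(os.path.realpath(name))             # final absolute path, all links resolved

      if __name__ == "__main__":
          show_absolute_path(sys.argv[1] if len(sys.argv) > 1 else ".")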

  20. Nucleosomes, transcription, and probability

    PubMed Central

    Boeger, Hinrich

    2014-01-01

    Speaking of current measurements on single ion channel molecules, David Colquhoun wrote in 2006, “Individual molecules behave randomly, so suddenly we had to learn how to deal with stochastic processes.” Here I describe theoretical efforts to understand recent experimental observations on the chromatin structure of single gene molecules, a molecular biologist's path toward probabilistic theories. PMID:25368419

  1. Comparing the mannitol-egg yolk-polymyxin agar plating method with the three-tube most-probable-number method for enumeration of Bacillus cereus spores in raw and high-temperature, short-time pasteurized milk.

    PubMed

    Harper, Nigel M; Getty, Kelly J K; Schmidt, Karen A; Nutsch, Abbey L; Linton, Richard H

    2011-03-01

    The U.S. Food and Drug Administration's Bacteriological Analytical Manual recommends two enumeration methods for Bacillus cereus: (i) standard plate count method with mannitol-egg yolk-polymyxin (MYP) agar and (ii) a most-probable-number (MPN) method with tryptic soy broth (TSB) supplemented with 0.1% polymyxin sulfate. This study compared the effectiveness of MYP and MPN methods for detecting and enumerating B. cereus in raw and high-temperature, short-time pasteurized skim (0.5%), 2%, and whole (3.5%) bovine milk stored at 4°C for 96 h. Each milk sample was inoculated with B. cereus EZ-Spores and sampled at 0, 48, and 96 h after inoculation. There were no differences (P > 0.05) in B. cereus populations among sampling times for all milk types, so data were pooled to obtain overall mean values for each treatment. The overall B. cereus population mean of pooled sampling times for the MPN method (2.59 log CFU/ml) was greater (P < 0.05) than that for the MYP plate count method (1.89 log CFU/ml). B. cereus populations in the inoculated milk samples ranged from 2.36 to 3.46 and 2.66 to 3.58 log CFU/ml for inoculated milk treatments for the MYP plate count and MPN methods, respectively, which is below the level necessary for toxin production. The MPN method recovered more B. cereus, which makes it useful for validation research. However, the MYP plate count method for enumeration of B. cereus also had advantages, including its ease of use and faster time to results (2 versus 5 days for MPN).

  2. Human-machine teaming for effective estimation and path planning

    NASA Astrophysics Data System (ADS)

    McCourt, Michael J.; Mehta, Siddhartha S.; Doucette, Emily A.; Curtis, J. Willard

    2016-05-01

    While traditional sensors provide accurate measurements of quantifiable information, humans provide better qualitative information and holistic assessments. Sensor fusion approaches that team humans and machines can take advantage of the benefits provided by each while mitigating the shortcomings. These two sensor sources can be fused together using Bayesian fusion, which assumes that there is a method of generating a probabilistic representation of the sensor measurement. This general framework of fusing estimates can also be applied to joint human-machine decision making. In the simple case, binary decisions can be fused by using a probability of taking an action versus inaction from each decision-making source. These are fused together to arrive at a final probability of taking an action, which would be taken if above a specified threshold. In the case of path planning, rather than binary decisions being fused, complex decisions can be fused by allowing the human and machine to interact with each other. For example, the human can draw a suggested path while the machine planning algorithm can refine it to avoid obstacles and remain dynamically feasible. Similarly, the human can revise a suggested path to achieve secondary goals not encoded in the algorithm such as avoiding dangerous areas in the environment.
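
    For the simple binary case described above, one common fusion rule (an illustrative sketch; the log-odds combination, the shared prior, and the numbers are assumptions, not taken from the paper) is:

      import math

      def fuse_action_probabilities(p_human, p_machine, prior=0.5):
          """Fuse two independent estimates of P(action) via log-odds (naive Bayes)."""
          def logit(p):
              return math.log(p / (1.0 - p))
          fused_logit = logit(p_human) + logit(p_machine) - logit(prior)
          return 1.0 / (1.0 + math.exp(-fused_logit))

      p = fuse_action_probabilities(p_human=0.7, p_machine=0.6)
      print(round(p, 3), p > 0.5)    # fused probability and the resulting decision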

  3. Abstract Models of Probability

    NASA Astrophysics Data System (ADS)

    Maximov, V. M.

    2001-12-01

    Probability theory presents a mathematical formalization of intuitive ideas of independent events and a probability as a measure of randomness. It is based on axioms 1-5 of A.N. Kolmogorov 1 and their generalizations 2. Different formalized refinements were proposed for such notions as events, independence, random value etc. 2,3, whereas the measure of randomness, i.e. numbers from [0,1], remained unchanged. To be precise, we mention some attempts to generalize probability theory with negative probabilities 4. On the other hand, physicists tried to use negative and even complex values of probability to explain some paradoxes in quantum mechanics 5,6,7. Only recently has the necessity of formalizing quantum mechanics and its foundations 8 led to the construction of p-adic probabilities 9,10,11, which essentially extended our concept of probability and randomness. Therefore, a natural question arises: how to describe algebraic structures whose elements can be used as a measure of randomness. As a consequence, a necessity arises to define the types of randomness corresponding to every such algebraic structure. Possibly, this leads to another concept of randomness whose nature differs from the combinatorial-metric conception of Kolmogorov. Apparently, a discrepancy between the real type of randomness underlying some experimental data and the model of randomness used for data processing leads to paradoxes 12. An algebraic structure whose elements can be used to estimate randomness will be called a probability set Φ. Naturally, the elements of Φ are the probabilities.

  4. Tackling higher derivative ghosts with the Euclidean path integral

    SciTech Connect

    Fontanini, Michele; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  5. Probability with Roulette

    ERIC Educational Resources Information Center

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.
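
    For instance, the expected value of a few standard American-roulette bets can be computed directly (a minimal sketch of the kind of calculation the article describes):

      def expected_value(p_win, payout, stake=1.0):
          """Expected profit per unit stake: a win pays `payout`:1, a loss forfeits the stake."""
          return p_win * payout * stake - (1.0 - p_win) * stake

      # American wheel: 38 pockets (1-36, 0, 00)
      bets = {
          "straight up (pays 35:1)": (1 / 38, 35),
          "red (pays 1:1)":          (18 / 38, 1),
          "dozen (pays 2:1)":        (12 / 38, 2),
      }
      for name, (p, payout) in bets.items():
          print(f"{name}: EV = {expected_value(p, payout):+.4f} per unit bet")
      # every bet comes out to about -0.0526, the house edge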

  6. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  7. Probability distributions for multimeric systems.

    PubMed

    Albert, Jaroslav; Rooman, Marianne

    2016-01-01

    We propose a fast and accurate method of obtaining the equilibrium mono-modal joint probability distributions for multimeric systems. The method necessitates only two assumptions: that the copy number of all species of molecule may be treated as continuous, and that the probability density functions (pdf) are well approximated by multivariate skew normal distributions (MSND). Starting from the master equation, we convert the problem into a set of equations for the statistical moments which are then expressed in terms of the parameters intrinsic to the MSND. Using an optimization package on Mathematica, we minimize a Euclidean distance function comprising the sum of the squared differences between the left and the right hand sides of these equations. Comparison of results obtained via our method with those rendered by the Gillespie algorithm demonstrates our method to be highly accurate as well as efficient.

  8. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
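
    The closed-form solution itself is not reproduced here, but the quantity being computed can be illustrated numerically (a sketch with invented geometry and covariance: the probability that the relative position at closest approach, assumed Gaussian in the encounter plane, falls within the combined object radius):

      import math

      def collision_probability(miss_distance, sigma_x, sigma_y, combined_radius, n=400):
          """Integrate a 2-D Gaussian (centered at the nominal miss) over a disc of
          radius `combined_radius` around the other object, on a simple grid."""
          r, total = combined_radius, 0.0
          step = 2.0 * r / n
          for i in range(n):
              for j in range(n):
                  x = -r + (i + 0.5) * step
                  y = -r + (j + 0.5) * step
                  if x * x + y * y > r * r:
                      continue
                  dx = x - miss_distance          # nominal miss lies along the x axis
                  pdf = math.exp(-0.5 * ((dx / sigma_x) ** 2 + (y / sigma_y) ** 2))
                  pdf /= 2.0 * math.pi * sigma_x * sigma_y
                  total += pdf * step * step
          return total

      # invented numbers: 200 m nominal miss, 100 m x 300 m uncertainty, 10 m combined radius
      print(f"{collision_probability(200.0, 100.0, 300.0, 10.0):.2e}")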

  9. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.

  10. Gibbs Ensembles of Nonintersecting Paths

    NASA Astrophysics Data System (ADS)

    Borodin, Alexei; Shlosman, Senya

    2010-01-01

    We consider a family of determinantal random point processes on the two-dimensional lattice and prove that members of our family can be interpreted as a kind of Gibbs ensembles of nonintersecting paths. Examples include probability measures on lozenge and domino tilings of the plane, some of which are non-translation-invariant. The correlation kernels of our processes can be viewed as extensions of the discrete sine kernel, and we show that the Gibbs property is a consequence of simple linear relations satisfied by these kernels. The processes depend on infinitely many parameters, which are closely related to parametrization of totally positive Toeplitz matrices.

  11. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  12. Is probability of frequency too narrow?

    SciTech Connect

    Martz, H.F.

    1993-10-01

    Modern methods of statistical data analysis, such as empirical and hierarchical Bayesian methods, should find increasing use in future Probabilistic Risk Assessment (PRA) applications. In addition, there will be a more formalized use of expert judgment in future PRAs. These methods require an extension of the probabilistic framework of PRA, in particular, the popular notion of probability of frequency, to consideration of frequency of frequency, frequency of probability, and probability of probability. The genesis, interpretation, and examples of these three extended notions are discussed.

  13. Identifying decohering paths in closed quantum systems

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1990-01-01

    A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. There are interesting differences with each of these which are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. Also briefly discussed is how one might quantify some other aspects of classicality. The important role that concrete calculations play in testing this and other proposals is stressed.

  14. A novel method for patient exit and entrance dose prediction based on water equivalent path length measured with an amorphous silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Kavuma, Awusi; Glegg, Martin; Metwaly, Mohamed; Currie, Garry; Elliott, Alex

    2010-01-01

    In vivo dosimetry is one of the quality assurance tools used in radiotherapy to monitor the dose delivered to the patient. Electronic portal imaging device (EPID) images for a set of solid water phantoms of varying thicknesses were acquired and the data fitted onto a quadratic equation, which relates the reduction in photon beam intensity to the attenuation coefficient and material thickness at a reference condition. The quadratic model is used to convert the measured grey scale value into water equivalent path length (EPL) at each pixel for any material imaged by the detector. For any other non-reference conditions, scatter, field size and MU variation effects on the image were corrected by relative measurements using an ionization chamber and an EPID. The 2D EPL is linked to the percentage exit dose table, for different thicknesses and field sizes, thereby converting the plane pixel values at each point into a 2D dose map. The off-axis ratio is corrected using envelope and boundary profiles generated from the treatment planning system (TPS). The method requires field size, monitor unit and source-to-surface distance (SSD) as clinical input parameters to predict the exit dose, which is then used to determine the entrance dose. The measured pixel dose maps were compared with calculated doses from TPS for both entrance and exit depth of phantom. The gamma index at 3% dose difference (DD) and 3 mm distance to agreement (DTA) resulted in an average of 97% passing for the square fields of 5, 10, 15 and 20 cm. The exit dose EPID dose distributions predicted by the algorithm were in better agreement with TPS-calculated doses than phantom entrance dose distributions.
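
    The calibration step can be sketched as follows (illustrative only: synthetic grey-scale readings stand in for real EPID data, and a quadratic in thickness is fitted and then inverted to recover a water-equivalent path length):

      import numpy as np

      # hypothetical calibration: solid-water thickness (cm) vs. relative EPID signal
      thickness = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
      signal = np.array([1.00, 0.74, 0.56, 0.43, 0.34, 0.27, 0.22])

      # fit ln(I0/I) as a quadratic in thickness (attenuation plus a second-order term)
      atten = np.log(signal[0] / signal)
      a, b, c = np.polyfit(thickness, atten, 2)

      def water_equivalent_path_length(pixel_signal):
          """Invert the fitted quadratic to get an EPL (cm) for a measured pixel value."""
          y = np.log(signal[0] / pixel_signal)
          roots = np.roots([a, b, c - y])
          real = roots[np.isreal(roots)].real
          return float(real[real >= 0].min())     # smallest non-negative (physical) root

      print(round(water_equivalent_path_length(0.50), 1))   # roughly 12 cm for this fake data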

  15. Multiple paths in complex tasks

    NASA Technical Reports Server (NTRS)

    Galanter, Eugene; Wiegand, Thomas; Mark, Gloria

    1987-01-01

    The relationship between utility judgments of subtask paths and the utility of the task as a whole was examined. The convergent validation procedure is based on the assumption that measurements of the same quantity done with different methods should covary. The utility measures of the subtasks were obtained during the performance of an aircraft flight controller navigation task. Analyses helped decide among various models of subtask utility combination, whether the utility ratings of subtask paths predict the whole tasks utility rating, and indirectly, whether judgmental models need to include the equivalent of cognitive noise.

  16. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  17. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…
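
    A quick computational analogue of the same thread (a sketch in Python rather than the article's Excel): the mean and variance of a Binomial(n, p), checked by simulating sums of independent Bernoulli draws:

      import random
      import statistics

      n, p = 20, 0.3
      print("theoretical mean:", n * p, "variance:", n * p * (1 - p))

      random.seed(0)
      def binomial_draw():
          return sum(random.random() < p for _ in range(n))   # sum of n Bernoulli(p) trials

      draws = [binomial_draw() for _ in range(10000)]
      print("sample mean:", round(statistics.mean(draws), 2),
            "sample variance:", round(statistics.variance(draws), 2))
      # by the central limit theorem the histogram of `draws` is approximately
      # normal with the mean and variance printed above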

  18. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).

  19. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON). PMID:23435008

  20. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    The path integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition with a transition probability function, known as the propagator. In its early development, several studies focused on applying this method only to problems in quantum mechanics. Nevertheless, the path integral can also be applied to other subjects with some modifications to the propagator function. In this study, we investigate the application of the path integral method to financial derivatives, specifically stock options. The Black-Scholes model (Nobel 1997) was a starting anchor in option pricing. Though this model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes model is still a legitimate equation for pricing an option. Its derivation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the path integral, in that the share's initial price is transformed to its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we compare the path integral analytical solution with a Monte Carlo numerical solution to examine the similarity between the two methods.
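
    The comparison the authors describe can be reproduced in miniature (a generic sketch with made-up parameters, independent of their path-integral derivation): price a European call with the closed-form Black-Scholes formula and with Monte Carlo sampling of the terminal price.

      import math, random

      def black_scholes_call(s0, k, r, sigma, t):
          """Closed-form Black-Scholes price of a European call option."""
          d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
          d2 = d1 - sigma * math.sqrt(t)
          phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))   # standard normal CDF
          return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

      def monte_carlo_call(s0, k, r, sigma, t, n=200000, seed=7):
          """Average discounted payoff over sampled terminal prices of geometric Brownian motion."""
          rng = random.Random(seed)
          total = 0.0
          for _ in range(n):
              z = rng.gauss(0.0, 1.0)
              st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
              total += max(st - k, 0.0)
          return math.exp(-r * t) * total / n

      params = dict(s0=100.0, k=105.0, r=0.02, sigma=0.25, t=1.0)
      print(round(black_scholes_call(**params), 3), round(monte_carlo_call(**params), 3))
      # the two estimates should agree to within Monte Carlo error (about 8.7 here)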

  1. On Probability Domains

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2010-12-01

    Motivated by IF-probability theory (intuitionistic fuzzy), we study n-component probability domains in which each event represents a body of competing components and the range of a state represents a simplex S_n of n-tuples of possible rewards: the sum of the rewards is a number from [0,1]. For n = 1 we get fuzzy events, for example a bold algebra, and the corresponding fuzzy probability theory can be developed within the category ID of D-posets (equivalently effect algebras) of fuzzy sets and sequentially continuous D-homomorphisms. For n = 2 we get IF-events, i.e., pairs (μ, ν) of fuzzy sets μ, ν ∈ [0,1]^X such that μ(x) + ν(x) ≤ 1 for all x ∈ X, but we order our pairs (events) coordinatewise. Hence the structure of IF-events (where (μ_1, ν_1) ≤ (μ_2, ν_2) whenever μ_1 ≤ μ_2 and ν_2 ≤ ν_1) is different and, consequently, the resulting IF-probability theory models a different principle. The category ID is cogenerated by I = [0,1] (objects of ID are subobjects of powers I^X), has nice properties, and basic probabilistic notions and constructions are categorical. For example, states are morphisms. We introduce the category S_nD cogenerated by S_n = {(x_1, x_2, …, x_n) ∈ I^n : Σ_{i=1}^{n} x_i ≤ 1}, carrying the coordinatewise partial order, difference, and sequential convergence, and we show how basic probability notions can be defined within S_nD.

  2. Understanding Y haplotype matching probability.

    PubMed

    Brenner, Charles H

    2014-01-01

    The Y haplotype population-genetic terrain is better explored from a fresh perspective rather than by analogy with the more familiar autosomal ideas. For haplotype matching probabilities, versus for autosomal matching probabilities, explicit attention to modelling - such as how evolution got us where we are - is much more important while consideration of population frequency is much less so. This paper explores, extends, and explains some of the concepts of "Fundamental problem of forensic mathematics - the evidential strength of a rare haplotype match". That earlier paper presented and validated a "kappa method" formula for the evidential strength when a suspect matches a previously unseen haplotype (such as a Y-haplotype) at the crime scene. Mathematical implications of the kappa method are intuitive and reasonable. Suspicions to the contrary that have been raised rest on elementary errors. Critical to deriving the kappa method or any sensible evidential calculation is understanding that thinking about haplotype population frequency is a red herring; the pivotal question is one of matching probability. But confusion between the two is unfortunately institutionalized in much of the forensic world. Examples make clear why (matching) probability is not (population) frequency and why uncertainty intervals on matching probabilities are merely confused thinking. Forensic matching calculations should be based on a model, on stipulated premises. The model inevitably only approximates reality, and any error in the results comes only from error in the model, the inexactness of the approximation. Sampling variation does not measure that inexactness and hence is not helpful in explaining evidence and is in fact an impediment. Alternative haplotype matching probability approaches that various authors have considered are reviewed. Some are based on no model and cannot be taken seriously. For the others, some evaluation of the models is discussed. Recent evidence supports the adequacy of

  3. Discrete Coherent State Path Integrals

    NASA Astrophysics Data System (ADS)

    Marchioro, Thomas L., II

    1990-01-01

    The quantum theory provides a fundamental understanding of the physical world; however, as the number of degrees of freedom rises, the information required to specify quantum wavefunctions grows geometrically. Because basis set expansions mirror this geometric growth, a strict practical limit on quantum mechanics as a numerical tool arises, specifically, three degrees of freedom or fewer. Recent progress has been made utilizing Feynman's Path Integral formalism to bypass this geometric growth and instead calculate time-dependent correlation functions directly. The solution of the Schrodinger equation is converted into a large dimensional (formally infinite) integration, which can then be attacked with Monte Carlo techniques. To date, work in this area has concentrated on developing sophisticated mathematical algorithms for evaluating the highly oscillatory integrands occurring in Feynman Path Integrals. In an alternative approach, this work demonstrates two formulations of quantum dynamics for which the number of mathematical operations does not scale geometrically. Both methods utilize the Coherent State basis of quantum mechanics. First, a localized coherent state basis set expansion and an approximate short time propagator are developed. Iterations of the short time propagator lead to the full quantum dynamics if the coherent state basis is sufficiently dense along the classical phase space path of the system. Second, the coherent state path integral is examined in detail. For a common class of Hamiltonians, H = p^2/2 + V(x), the path integral is reformulated from a phase space-like expression into one depending on (q, q̇). It is demonstrated that this new path integral expression contains localized damping terms which can serve as a statistical weight for Monte Carlo evaluation of the integral--a process which scales approximately linearly with the number of degrees of freedom. Corrections to the traditional coherent state path integral, inspired by a

  4. Deciphering P-T paths in metamorphic rocks involving zoned minerals using quantified maps (XMapTools software) and thermodynamics methods: Examples from the Alps and the Himalaya.

    NASA Astrophysics Data System (ADS)

    Lanari, P.; Vidal, O.; Schwartz, S.; Riel, N.; Guillot, S.; Lewin, E.

    2012-04-01

    Metamorphic rocks are made of a mosaic of local thermodynamic equilibria involving minerals that grew at different times and under different pressure (P) and temperature (T) conditions. These local (in space but also in time) equilibria can be identified using micro-structural and textural criteria, and also tested using multi-equilibrium techniques. However, linking deformation with metamorphic conditions requires spatially continuous estimates of P and T conditions in at least two dimensions (P-T maps), which can be superimposed on the observed deformation structures. To this end, we have developed a new Matlab-based GUI software for microprobe X-ray map processing (XMapTools, http://www.xmaptools.com) based on the quantification method of De Andrade et al. (2006). The XMapTools software includes functions for quantification processing, two chemical modules (Chem2D, Triplot3D), structural formula functions for common minerals, and more than 50 empirical and semi-empirical geothermobarometers obtained from the literature. XMapTools can be easily coupled with multi-equilibrium thermobarometric calculations. We will present examples of application to two natural cases involving zoned minerals. The first example is a low-grade metapelite from the paleo-subduction wedge in the Western Alps (Schistes Lustrés unit) that contains both zoned chlorite and phengite, as well as quartz. The second sample is a Himalayan eclogite from the high-pressure unit of Stak (Pakistan) with an eclogitic garnet-omphacite assemblage retrogressed into a clinopyroxene-plagioclase-amphibole symplectite, and later into amphibole-biotite during the collisional event under crustal conditions. In both samples, P-T paths were recovered using multi-equilibrium or semi-empirical geothermobarometers included in the XMapTools package. The results will be compared and discussed with pseudosections calculated with the sample bulk composition and with different local bulk-rock compositions estimated with XMap

  5. Accurate Prediction of Hyperfine Coupling Constants in Muoniated and Hydrogenated Ethyl Radicals: Ab Initio Path Integral Simulation Study with Density Functional Theory Method.

    PubMed

    Yamada, Kenta; Kawashima, Yukio; Tachikawa, Masanori

    2014-05-13

    We performed ab initio path integral molecular dynamics (PIMD) simulations with a density functional theory (DFT) method to accurately predict hyperfine coupling constants (HFCCs) in the ethyl radical (CβH3-CαH2) and its Mu-substituted (muoniated) compound (CβH2Mu-CαH2). The substitution of a Mu atom, an ultralight isotope of the H atom, with larger nuclear quantum effect is expected to strongly affect the nature of the ethyl radical. The static conventional DFT calculations of CβH3-CαH2 find that the elongation of one Cβ-H bond causes a change in the shape of potential energy curve along the rotational angle via the imbalance of attractive and repulsive interactions between the methyl and methylene groups. Investigation of the methyl-group behavior including the nuclear quantum and thermal effects shows that an unbalanced CβH2Mu group with the elongated Cβ-Mu bond rotates around the Cβ-Cα bond in a muoniated ethyl radical, quite differently from the CβH3 group with the three equivalent Cβ-H bonds in the ethyl radical. These rotations couple with other molecular motions such as the methylene-group rocking motion (inversion), leading to difficulties in reproducing the corresponding barrier heights. Our PIMD simulations successfully predict the barrier heights to be close to the experimental values and provide a significant improvement in muon and proton HFCCs given by the static conventional DFT method. Further investigation reveals that the Cβ-Mu/H stretching motion, methyl-group rotation, methylene-group rocking motion, and HFCC values deeply intertwine with each other. Because these motions are different between the radicals, a proper description of the structural fluctuations reflecting the nuclear quantum and thermal effects is vital to evaluate HFCC values in theory to be comparable to the experimental ones. Accordingly, a fundamental difference in HFCC between the radicals arises from their intrinsic molecular motions at a finite temperature, in

  6. The Path of Carbon in Photosynthesis VI.

    DOE R&D Accomplishments Database

    Calvin, M.

    1949-06-30

    This paper is a compilation of the essential results of our experimental work in the determination of the path of carbon in photosynthesis. There are discussions of the dark fixation of photosynthesis and methods of separation and identification including paper chromatography and radioautography. The definition of the path of carbon in photosynthesis by the distribution of radioactivity within the compounds is described.

  7. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SPECTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a methodology developed to estimate emission factors for a variety of different area sources in a rapid, accurate, and cost-effective manner. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...

  8. Fractal probability laws.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2008-06-01

    We explore six classes of fractal probability laws defined on the positive half-line: Weibull, Fréchet, Lévy, hyper Pareto, hyper beta, and hyper shot noise. Each of these classes admits a unique statistical power-law structure, and is uniquely associated with a certain operation of renormalization. All six classes turn out to be one-dimensional projections of underlying Poisson processes which, in turn, are the unique fixed points of Poissonian renormalizations. The first three classes correspond to linear Poissonian renormalizations and are intimately related to extreme value theory (Weibull, Fréchet) and to the central limit theorem (Lévy). The other three classes correspond to nonlinear Poissonian renormalizations. Pareto's law, commonly perceived as the "universal fractal probability distribution", is merely a special case of the hyper Pareto class.
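
    One facet of the renormalization fixed-point idea can be glimpsed numerically for the Weibull class (a small aside, not taken from the paper): the minimum of n standard Weibull variates, rescaled by n^(1/k), is again Weibull with the same shape. The sketch below checks this empirically; the shape parameter and sample sizes are arbitrary.

        import random, statistics

        def weibull(k, rng):
            return rng.weibullvariate(1.0, k)   # standard Weibull: scale 1, shape k

        k, n, reps = 1.7, 50, 20_000
        rng = random.Random(0)
        rescaled_minima = [n ** (1.0 / k) * min(weibull(k, rng) for _ in range(n))
                           for _ in range(reps)]
        plain = [weibull(k, rng) for _ in range(reps)]

        # If min-stability holds, the two samples should share (approximately) the same moments.
        print(statistics.mean(rescaled_minima), statistics.mean(plain))
        print(statistics.stdev(rescaled_minima), statistics.stdev(plain))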

  9. Waste Package Misload Probability

    SciTech Connect

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using this information, a probability of occurrence is calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  10. Calculation of identity-by-descent probabilities of short chromosome segments.

    PubMed

    Tuchscherer, A; Teuscher, F; Reinsch, N

    2012-12-01

    For some purposes, identity-by-descent (IBD) probabilities for entire chromosome segments are required. Making use of pedigree information, the length of the segment, and the assumption of no crossing-over, a generalization of a previously published graph-theory-oriented algorithm accounting for nonzero IBD of common ancestors is given, which can be viewed as a method of path coefficients for entire chromosome segments. Furthermore, rules for setting up a gametic version of a segmental IBD matrix are presented. Results from the generalized graph-theory-oriented method, the gametic segmental IBD matrix, and the segmental IBD matrix for individuals are identical.
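
    The flavor of a segmental path-coefficient calculation can be conveyed by a toy sketch (this is not the authors' generalized algorithm, and it ignores inbred ancestors): assuming Poisson crossovers with no interference (Haldane), a segment of length x Morgans passes intact from one specific parental homolog through a single meiosis with probability (1/2)e^(-x), so a pedigree path of m meioses contributes ((1/2)e^(-x))^m.

        import math

        def segment_path_contribution(n_meioses, seg_length_morgans):
            """Toy contribution of one pedigree path to segmental IBD:
            (1/2)^m for gamete sampling times exp(-m*x) for no crossover within
            the segment at any of the m meioses (Haldane model, an assumption)."""
            m, x = n_meioses, seg_length_morgans
            return (0.5 ** m) * math.exp(-m * x)

        def segmental_ibd(path_lengths, seg_length_morgans):
            """Sum the contributions of all pedigree paths joining the two gametes."""
            return sum(segment_path_contribution(m, seg_length_morgans) for m in path_lengths)

        # Paternal gametes of two full sibs: two paths of two meioses each through the father.
        print(segmental_ibd([2, 2], 0.0))   # single-locus limit: 0.5
        print(segmental_ibd([2, 2], 0.1))   # 10 cM segment: slightly smaller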

  11. The Objective Borderline Method (OBM): A Probability-Based Model for Setting up an Objective Pass/Fail Cut-Off Score in Medical Programme Assessments

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-01-01

    The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…

  12. The Objective Borderline method (OBM): a probability-based model for setting up an objective pass/fail cut-off score in medical programme assessments.

    PubMed

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-05-01

    The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school we established the pass/fail cut-off scores by the abovementioned three methods. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, robust empirical basis and, no less importantly, is simple to use.
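
    The abstract does not spell out the OBM algorithm itself, so the sketch below only illustrates the two established comparator methods (Borderline Group and Regression) on hypothetical OSCE data; the grade coding (3 = borderline) and all numbers are invented.

        import statistics

        # Hypothetical (checklist score, examiner global grade) pairs; grade 3 denotes "borderline".
        results = [(48, 1), (52, 2), (55, 3), (58, 3), (59, 3), (61, 3),
                   (63, 4), (66, 4), (70, 4), (75, 5)]
        scores = [s for s, _ in results]
        grades = [g for _, g in results]

        # Borderline Group Method: cut-off = mean checklist score of the borderline group.
        bgm_cutoff = statistics.mean(s for s, g in results if g == 3)

        # Regression Method: regress score on grade, read off the fitted score at the borderline grade.
        mean_s, mean_g = statistics.mean(scores), statistics.mean(grades)
        cov = sum((g - mean_g) * (s - mean_s) for s, g in results) / (len(results) - 1)
        slope = cov / statistics.variance(grades)
        reg_cutoff = mean_s - slope * mean_g + slope * 3   # fitted score at grade 3

        print(round(bgm_cutoff, 1), round(reg_cutoff, 1))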

  13. Squeezed states and path integrals

    NASA Technical Reports Server (NTRS)

    Daubechies, Ingrid; Klauder, John R.

    1992-01-01

    The continuous-time regularization scheme for defining phase-space path integrals is briefly reviewed as a method to define a quantization procedure that is completely covariant under all smooth canonical coordinate transformations. As an illustration of this method, a limited set of transformations is discussed that have an image in the set of the usual squeezed states. It is noteworthy that even this limited set of transformations offers new possibilities for stationary phase approximations to quantum mechanical propagators.

  14. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  15. Integrated statistical modelling of spatial landslide probability

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e., the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km2 study area in southern Taiwan, using an inventory of 1399 landslides triggered by Typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
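
    The combination rule quoted in step (v) is simple to apply per pixel once the three component probabilities are available; the sketch below assumes they come as same-shaped arrays (hypothetical inputs, not the study's data or code).

        import numpy as np

        def integrated_landslide_probability(p_release, p_impact, p_zonal_release):
            """Per-pixel combination stated in the abstract: the maximum of the release
            probability and the product of impact probability and zonal release probability."""
            return np.maximum(p_release, p_impact * p_zonal_release)

        # Tiny hypothetical 2x3 rasters
        p_release = np.array([[0.02, 0.10, 0.00], [0.05, 0.00, 0.01]])
        p_impact  = np.array([[0.30, 0.00, 0.60], [0.10, 0.80, 0.20]])
        p_zonal   = np.array([[0.20, 0.15, 0.40], [0.25, 0.50, 0.10]])
        print(integrated_landslide_probability(p_release, p_impact, p_zonal))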

  16. Probability, Information and Statistical Physics

    NASA Astrophysics Data System (ADS)

    Kuzemsky, A. L.

    2016-03-01

    In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the interrelations between the theories. The basic aim is tutorial, i.e., to give a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information, and statistical description with regard to basic notions of the statistical mechanics of complex systems. The review also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the purpose of making these ideas easier to understand and to apply.

  17. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  18. Probability Issues in without Replacement Sampling

    ERIC Educational Resources Information Center

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…

  19. Single-Path Sigma from a Huge Dataset in Taiwan

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Hsuan; Lee, Chyi-Tyi

    2014-05-01

    Ground-motion variability, which is used in probabilistic seismic hazard analysis (PSHA) in computing the annual exceedance probability, is composed of random variability (aleatory uncertainty) and model uncertainty (epistemic uncertainty). Finding the random variability of ground motions has become an important issue in PSHA, and only the random variability can be used in deriving the annual exceedance probability of ground motion. Epistemic uncertainty is put in the logic tree to estimate the total uncertainty of ground motion. In the present study, we used about 18,859 records from 158 shallow earthquakes (Mw > 3.0, focal depth ≤ 35 km, each station has at least 20 records) from the Taiwan Strong-Motion Instrumentation Program (TSMIP) network to analyse the random variability of ground motion. First, a new ground-motion attenuation model was established by using this huge data set. Second, the residuals from the median attenuation were analysed by direct observation of inter-event variability and site-specific variability. Finally, the single-path variability was found by a moving-window method on either single-earthquake residuals or single-station residuals. A variogram method was also used to find the minimum variability for intra-event and inter-event residuals, respectively. Results reveal that 90% of the single-path sigmas (σSP) range from 0.219 to 0.254 (ln units) and are 58% to 64% smaller than the total sigma (σT = 0.601). The single-site sigmas (σSS) are also 39%-43% smaller. If we use only the random variability (single-path sigma) in PSHA, then the resultant hazard level would be 28% and 25% lower than the traditional one (using the total sigma) for the 475-year and 2475-year return periods, respectively, in Taipei.
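
    A minimal version of this residual decomposition can be written down directly (a hedged sketch; the column names, grouping, and numbers are assumptions, not the study's code): the total sigma is the standard deviation of all residuals, while a single-station (or single-path) sigma is obtained from residuals first demeaned within each station (or within each source-to-station path).

        import numpy as np
        import pandas as pd

        def grouped_sigma(df, group_cols, resid_col="residual"):
            """Standard deviation of residuals after removing each group's mean,
            i.e. the within-group (single-station / single-path) variability."""
            demeaned = df[resid_col] - df.groupby(group_cols)[resid_col].transform("mean")
            return demeaned.std(ddof=1)

        # Hypothetical ln-unit residuals per event/station pair
        df = pd.DataFrame({
            "event":    [1, 1, 1, 2, 2, 2, 3, 3],
            "station":  ["A", "B", "C", "A", "B", "C", "A", "B"],
            "residual": [0.35, -0.20, 0.10, 0.55, -0.05, 0.25, -0.40, -0.30],
        })

        sigma_total = df["residual"].std(ddof=1)
        sigma_single_station = grouped_sigma(df, ["station"])
        # A single-path sigma would group instead by (source region, station), given
        # repeated recordings along each path.
        print(sigma_total, sigma_single_station)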

  20. Dynamic versus static fission paths with realistic interactions

    NASA Astrophysics Data System (ADS)

    Giuliani, Samuel A.; Robledo, Luis M.; Rodríguez-Guzmán, R.

    2014-11-01

    The properties of dynamic (least action) fission paths are analyzed and compared to the ones of the more traditional static (least energy) paths. Both the Barcelona-Catania-Paris-Madrid and Gogny D1M energy density functionals are used in the calculation of the Hartree-Fock-Bogoliubov (HFB) constrained configurations providing the potential energy and collective inertias. The action is computed as in the Wentzel-Kramers-Brillouin method. A full variational search of the least-action path over the complete variational space of HFB wave functions is cumbersome and probably unnecessary if the relevant degrees of freedom are identified. In this paper, we consider the particle number fluctuation degree of freedom that explores the amount of pairing correlations in the wave function. For a given shape, the minimum action can be up to a factor of 3 smaller than the action computed for the minimum energy state with the same shape. The impact of this reduction on the lifetimes is enormous and dramatically improves the agreement with experimental data in the few examples considered.
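
    The WKB action mentioned above can be sketched numerically for a one-dimensional collective coordinate, S = ∫ sqrt(2 B(q) (V(q) − E)) dq over the classically forbidden region (a toy sketch; the potential, inertias, and energy below are invented placeholders, not the functional results).

        import numpy as np

        def wkb_action(q, V, B, E):
            """Discretized WKB action: integral of sqrt(2*B(q)*(V(q)-E)) dq,
            restricted to the region where V(q) > E (the barrier)."""
            integrand = np.sqrt(np.clip(2.0 * B * (V - E), 0.0, None))
            return np.trapz(integrand, q)

        q = np.linspace(0.0, 3.0, 301)                 # collective coordinate (arbitrary units)
        V = 6.0 * np.exp(-((q - 1.5) ** 2) / 0.5)      # invented barrier
        E = 1.0                                        # collective energy

        B_static  = np.full_like(q, 20.0)              # inertia along a least-energy path (placeholder)
        B_dynamic = np.full_like(q, 12.0)              # smaller effective inertia along a least-action path

        print(wkb_action(q, V, B_static, E), wkb_action(q, V, B_dynamic, E))
        # A smaller action translates into an exponentially shorter fission lifetime.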

  1. Mapped interpolation scheme for single-point energy corrections in reaction rate calculations and a critical evaluation of dual-level reaction path dynamics methods

    SciTech Connect

    Chuang, Y.Y.; Truhlar, D.G.; Corchado, J.C.

    1999-02-25

    Three procedures for incorporating higher level electronic structure data into reaction path dynamics calculations are tested. In one procedure, variational transition state theory with interpolated single-point energies, which is denoted VTST-ISPE, a few extra energies calculated with a higher level theory along the lower level reaction path are used to correct the classical energetic profile of the reaction. In the second procedure, denoted variational transition state theory with interpolated optimized corrections (VTST-IOC), which the authors introduced earlier, higher level corrections to energies, frequencies, and moments of inertia are based on stationary-point geometries reoptimized at a higher level than the reaction path was calculated. The third procedure, called interpolated optimized energies (IOE), is like IOC except it omits the frequency correction. Three hydrogen-transfer reactions, CH₃ + H′H → CH₃H′ + H (R1), OH + H′H → HOH′ + H (R2), and OH + H′CH₃ → HOH′ + CH₃ (R3), are used to test and validate the procedures by comparing their predictions to the reaction rate evaluated with a full variational transition state theory calculation including multidimensional tunneling (VTST/MT) at the higher level. The authors present a very efficient scheme for carrying out VTST-ISPE calculations, which are popular due to their lower computational cost. They also show, on the basis of calculations of the reactions R1-R3 with eight pairs of higher and lower levels, that VTST-IOC with higher level data only at stationary points is a more reliable dual-level procedure than VTST-ISPE with higher level energies all along the reaction path. Although the frequencies along the reaction path are not corrected in the IOE scheme, the results are still better than those from VTST-ISPE; this indicates the importance of optimizing the geometry at the highest possible level.
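
    The core ISPE idea, correcting a lower-level energy profile with a few higher-level single-point energies, can be sketched as interpolating the energy difference along the reaction coordinate (a simplified linear-interpolation illustration with invented numbers, not the authors' mapped-interpolation scheme or data).

        import numpy as np

        # Lower-level classical energy profile along the reaction coordinate s (invented, kcal/mol)
        s_low = np.linspace(-1.0, 1.0, 41)
        E_low = 10.0 * np.exp(-(s_low / 0.4) ** 2)

        # Higher-level single-point energies computed only at a few points along the same path
        s_high = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        E_high = np.array([0.2, 3.1, 12.4, 3.0, 0.1])

        # ISPE-style correction: interpolate the (high - low) difference and add it everywhere
        delta_at_high = E_high - np.interp(s_high, s_low, E_low)
        E_corrected = E_low + np.interp(s_low, s_high, delta_at_high)

        print(E_corrected.max())   # corrected barrier height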

  2. Quantifying model uncertainty in dynamical systems driven by non-Gaussian Lévy stable noise with observations on mean exit time or escape probability

    NASA Astrophysics Data System (ADS)

    Gao, Ting; Duan, Jinqiao

    2016-10-01

    Complex systems are sometimes subject to non-Gaussian α-stable Lévy fluctuations. A new method is devised to estimate the uncertain parameter α and other system parameters, using observations on mean exit time or escape probability for the system evolution. It is based on solving an inverse problem for a deterministic, nonlocal partial differential equation via numerical optimization. The existing methods for estimating parameters require observations on system state sample paths for long time periods or probability densities at large spatial ranges. The method proposed here, instead, requires observations on mean exit time or escape probability only for an arbitrarily small spatial domain. This new method is beneficial to systems for which mean exit time or escape probability is feasible to observe.

  3. Path similarity skeleton graph matching.

    PubMed

    Bai, Xiang; Latecki, Longin Jan

    2008-07-01

    This paper presents a novel framework for shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the shortest paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we completely ignore the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of shortest paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with Discrete Curve Evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and occlusion.

  4. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  5. Probability, statistics, and computational science.

    PubMed

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters.

  6. Analysis and Monte Carlo simulation of near-terminal aircraft flight paths

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1982-01-01

    The flight paths of arriving and departing aircraft at an airport are stochastically represented. Radar data of the aircraft movements are used to decompose the flight paths into linear and curvilinear segments. Variables which describe the segments are derived, and the best fitting probability distributions of the variables, based on a sample of flight paths, are found. Conversely, given information on the probability distribution of the variables, generation of a random sample of flight paths in a Monte Carlo simulation is discussed. Actual flight paths at Dulles International Airport are analyzed and simulated.
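
    As a cartoon of the generation step only (not the NASA model), once distributions have been fitted to the segment descriptors, new flight paths can be drawn segment by segment; the distributions and parameter values below are arbitrary stand-ins.

        import random, math

        def simulate_flight_path(n_segments=6, seed=None):
            """Draw a 2-D path as a chain of linear segments whose lengths and heading
            changes come from fitted probability distributions (placeholder choices here)."""
            rng = random.Random(seed)
            x, y, heading = 0.0, 0.0, 0.0
            points = [(x, y)]
            for _ in range(n_segments):
                length = rng.lognormvariate(1.5, 0.4)          # assumed segment-length law
                heading += math.radians(rng.gauss(0.0, 15.0))   # assumed turn-angle law (degrees)
                x += length * math.cos(heading)
                y += length * math.sin(heading)
                points.append((x, y))
            return points

        for px, py in simulate_flight_path(seed=42):
            print(f"{px:7.2f} {py:7.2f}")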

  7. The Logic Behind Feynman's Paths

    NASA Astrophysics Data System (ADS)

    García Álvarez, Edgardo T.

    The classical notions of continuity and mechanical causality are left in order to reformulate the Quantum Theory starting from two principles: (I) the intrinsic randomness of quantum process at microphysical level, (II) the projective representations of symmetries of the system. The second principle determines the geometry and then a new logic for describing the history of events (Feynman's paths) that modifies the rules of classical probabilistic calculus. The notion of classical trajectory is replaced by a history of spontaneous, random and discontinuous events. So the theory is reduced to determining the probability distribution for such histories accordingly with the symmetries of the system. The representation of the logic in terms of amplitudes leads to Feynman rules and, alternatively, its representation in terms of projectors results in the Schwinger trace formula.

  8. People's conditional probability judgments follow probability theory (plus noise).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-09-01

    A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.
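
    One way to see the "noise-cancelling identity" claim is by simulation (a sketch under an assumed symmetric-error form of the noise, with the four events simulated independently for simplicity; this is not the authors' experimental code): if each probability estimate is a sample frequency whose individual readings are flipped with some small probability d, then expressions such as P(A) + P(B) − P(A∧B) − P(A∨B) still average to zero, while individual estimates are biased toward 0.5.

        import random

        def noisy_estimate(p_true, d=0.15, n=300, rng=random.Random(0)):
            """Frequency estimate of an event with true probability p_true in which
            each individual reading is recorded incorrectly (flipped) with probability d."""
            hits = 0
            for _ in range(n):
                outcome = rng.random() < p_true
                if rng.random() < d:
                    outcome = not outcome
                hits += outcome
            return hits / n

        rng = random.Random(1)
        pA, pB, pAB = 0.6, 0.5, 0.3
        pAorB = pA + pB - pAB

        trials = 2000
        identity_vals, raw_vals = [], []
        for _ in range(trials):
            eA, eB = noisy_estimate(pA, rng=rng), noisy_estimate(pB, rng=rng)
            eAB, eAorB = noisy_estimate(pAB, rng=rng), noisy_estimate(pAorB, rng=rng)
            identity_vals.append(eA + eB - eAB - eAorB)   # noise cancels: mean ~ 0
            raw_vals.append(eA)                           # biased toward 0.5: mean ~ (1-2d)*pA + d

        print(sum(identity_vals) / trials, sum(raw_vals) / trials)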

  9. People's conditional probability judgments follow probability theory (plus noise).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-09-01

    A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities. PMID:27570097

  10. Tortuous path chemical preconcentrator

    DOEpatents

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

    A non-planar, tortuous path chemical preconcentrator has a high internal surface area having a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream that can be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between the sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to that of the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  11. A Path to Discovery

    ERIC Educational Resources Information Center

    Stegemoller, William; Stegemoller, Rebecca

    2004-01-01

    The path taken and the turns made as a turtle traces a polygon are examined to discover an important theorem in geometry. A unique tool, the Angle Adder, is implemented in the investigation. (Contains 9 figures.)

  12. An objective method to determine the probability distribution of the minimum apparent age of a sample of radio-isotopic dates

    NASA Astrophysics Data System (ADS)

    Ickert, R. B.; Mundil, R.

    2012-12-01

    Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (time between crystallization and eruption) can be high and is variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates in order to constrain the maximum age of the deposition of the host rock. However, neither the extraction of that age nor the assignment of a meaningful uncertainty is straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect, if any, of the sampling distribution itself). This method has a number of desirable characteristics: It can incorporate all data

  13. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy

    PubMed Central

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-01-01

    Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based (standard) models performed as well as those incorporating spatial information. Discrimination was similar between models, but the RFC_standard model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d. = 0.09) and 3.9 (s.d. = 2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFC_standard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717

  14. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; Neary, Vincent S.

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
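
    A bare-bones I-FORM contour (without the paper's PCA step or its fitted distributions) can be sketched as follows; the Weibull marginal for Hs, the conditional lognormal for Te, and all parameter values are assumptions chosen only to make the example run.

        import numpy as np
        from scipy import stats

        # Reliability index for a 50-year return period with hourly sea states (an assumption)
        n_states_per_year = 365.25 * 24
        beta = stats.norm.ppf(1.0 - 1.0 / (50 * n_states_per_year))

        # Circle of radius beta in standard-normal (u1, u2) space
        theta = np.linspace(0.0, 2.0 * np.pi, 361)
        u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

        # Inverse Rosenblatt transform with assumed distributions:
        #   Hs ~ Weibull (marginal), Te | Hs ~ lognormal with an Hs-dependent median
        hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.6, scale=2.8)
        te_median = 4.0 + 2.2 * np.sqrt(hs)             # hypothetical dependence
        te = stats.lognorm.ppf(stats.norm.cdf(u2), s=0.18, scale=te_median)

        for h, t in list(zip(hs, te))[::60]:
            print(f"Hs = {h:5.2f} m   Te = {t:5.2f} s")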

  15. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
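
    The rejection step can be illustrated on a one-dimensional toy problem (a cartoon of the sampling idea only, not the WPIS algorithm): free-particle-like paths about a fixed centroid are proposed as Brownian bridges, and each is accepted with a probability given by a harmonic weight, which is at most one, so rejection sampling is valid.

        import math, random

        def brownian_bridge(n_beads, width, rng):
            """Discretized closed 'free-particle-like' path: a random walk of n_beads
            steps about the centroid, pinned so that it returns to zero."""
            walk = [0.0]
            for _ in range(n_beads):
                walk.append(walk[-1] + rng.gauss(0.0, width))
            drift = walk[-1]
            return [w - drift * i / n_beads for i, w in enumerate(walk)]

        def sample_weighted_path(n_beads=32, width=0.1, stiffness=0.5, seed=0):
            """Rejection sampling: propose free paths, accept with probability
            exp(-stiffness * sum(x_i^2)) <= 1 (a harmonic importance weight)."""
            rng = random.Random(seed)
            proposals = 0
            while True:
                proposals += 1
                path = brownian_bridge(n_beads, width, rng)
                weight = math.exp(-stiffness * sum(x * x for x in path))
                if rng.random() < weight:
                    return path, proposals

        path, proposals = sample_weighted_path()
        print(f"accepted after {proposals} proposals; max |x| on path = {max(abs(x) for x in path):.3f}")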

  16. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023

  17. Probability distributions for magnetotellurics

    SciTech Connect

    Stodt, John A.

    1982-11-01

    Estimates of the magnetotelluric transfer functions can be viewed as ratios of two complex random variables. It is assumed that the numerator and denominator are governed approximately by a joint complex normal distribution. Under this assumption, probability distributions are obtained for the magnitude, squared magnitude, logarithm of the squared magnitude, and the phase of the estimates. Normal approximations to the distributions are obtained by calculating mean values and variances from error propagation, and the distributions are plotted with their normal approximations for different percentage errors in the numerator and denominator of the estimates, ranging from 10% to 75%. The distribution of the phase is approximated well by a normal distribution for the range of errors considered, while the distribution of the logarithm of the squared magnitude is approximated by a normal distribution for a much larger range of errors than is the distribution of the squared magnitude. The distribution of the squared magnitude is most sensitive to the presence of noise in the denominator of the estimate, in which case the true distribution deviates significantly from normal behavior as the percentage errors exceed 10%. In contrast, the normal approximation to the distribution of the logarithm of the magnitude is useful for errors as large as 75%.
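
    The setup lends itself to a quick Monte Carlo check (a sketch with invented values and noise levels, not the paper's derivations): draw the numerator and denominator as complex normal variables around their true values and inspect how nearly normal the magnitude, squared magnitude, its logarithm, and the phase of the ratio are.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200_000

        def complex_normal(mean, rel_error, size):
            """Complex value plus circular complex Gaussian noise of the given relative size."""
            sigma = rel_error * abs(mean) / np.sqrt(2.0)
            return mean + sigma * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

        numerator   = complex_normal(2.0 + 1.0j, 0.25, n)   # 25% error (assumed)
        denominator = complex_normal(1.0 - 0.5j, 0.25, n)
        Z = numerator / denominator

        for name, x in [("|Z|", np.abs(Z)), ("|Z|^2", np.abs(Z) ** 2),
                        ("log |Z|^2", np.log(np.abs(Z) ** 2)), ("phase", np.angle(Z))]:
            skew = np.mean(((x - x.mean()) / x.std()) ** 3)   # rough normality check
            print(f"{name:10s} mean={x.mean():7.3f} std={x.std():6.3f} skewness={skew:6.3f}")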

  18. Nonholonomic catheter path reconstruction using electromagnetic tracking

    NASA Astrophysics Data System (ADS)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge in accurate path reconstructions. We address this challenge by means of a filtering technique incorporating the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using the Ascension's 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements, and 3.3 mm with manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach successfully improved the path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.

  19. Path optimization for oil probe

    NASA Astrophysics Data System (ADS)

    Smith, O'Neil; Rahmes, Mark; Blue, Mark; Peter, Adrian

    2014-05-01

    We discuss a robust method for optimal oil probe path planning inspired by medical imaging. Horizontal wells require three-dimensional steering made possible by the rotary steerable capabilities of the system, which allows the hole to intersect multiple target shale gas zones. Horizontal "legs" can be over a mile long; the longer the exposure length, the more oil and natural gas is drained and the faster it can flow. More oil and natural gas can be produced with fewer wells and less surface disturbance. Horizontal drilling can help producers tap oil and natural gas deposits under surface areas where a vertical well cannot be drilled, such as under developed or environmentally sensitive areas. Drilling creates well paths which have multiple twists and turns to try to hit multiple accumulations from a single well location. Our algorithm can be used to augment current state of the art methods. Our goal is to obtain a 3D path with nodes describing the optimal route to the destination. This algorithm works with BIG data and saves cost in planning for probe insertion. Our solution may be able to help increase the energy extracted vs. input energy.

  20. Decision from Models: Generalizing Probability Information to Novel Tasks

    PubMed Central

    Zhang, Hang; Paily, Jacienta T.; Maloney, Laurence T.

    2014-01-01

    We investigate a new type of decision under risk where—to succeed—participants must generalize their experience in one set of tasks to a novel set of tasks. We asked participants to trade distance for reward in a virtual minefield where each successive step incurred the same fixed probability of failure (referred to as hazard). With constant hazard, the probability of success (the survival function) decreases exponentially with path length. On each trial, participants chose between a shorter path with smaller reward and a longer (more dangerous) path with larger reward. They received feedback in 160 training trials: encountering a mine along their chosen path resulted in zero reward and successful completion of the path led to the reward associated with the path chosen. They then completed 600 no-feedback test trials with novel combinations of path length and rewards. To maximize expected gain, participants had to learn the correct exponential model in training and generalize it to the test conditions. We compared how participants discounted reward with increasing path length to the predictions of nine choice models including the correct exponential model. The choices of a majority of the participants were best accounted for by a model of the correct exponential form although with marked overestimation of the hazard rate. The decision-from-models paradigm differs from experience-based decision paradigms such as decision-from-sampling in the importance assigned to generalizing experience-based information to novel tasks. The task itself is representative of everyday tasks involving repeated decisions in stochastically invariant environments. PMID:25621287
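
    The structure of the task is easy to restate numerically (a sketch, not the experiment's code): with a constant per-step hazard h, the survival probability of a path of n steps is (1 − h)^n, so the expected gain of a path is its reward times that exponential survival term. The hazard and rewards below are invented.

        def expected_gain(reward, n_steps, hazard):
            """Expected gain of a path: reward times the probability of surviving all steps."""
            return reward * (1.0 - hazard) ** n_steps

        hazard = 0.07           # assumed per-step probability of hitting a mine
        short_path = expected_gain(reward=40, n_steps=6, hazard=hazard)
        long_path  = expected_gain(reward=90, n_steps=14, hazard=hazard)
        print(short_path, long_path)   # choose the path with the larger expected gain

        # Overestimating the hazard (as participants tended to) makes the longer path look worse:
        print(expected_gain(90, 14, hazard=0.12))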

  1. Assessment of the probability of contaminating Mars

    NASA Technical Reports Server (NTRS)

    Judd, B. R.; North, D. W.; Pezier, J. P.

    1974-01-01

    New methodology is proposed to assess the probability that the planet Mars will be biologically contaminated by terrestrial microorganisms aboard a spacecraft. Present NASA methods are based on the Sagan-Coleman formula, which states that the probability of contamination is the product of the expected microbial release and a probability of growth. The proposed new methodology extends the Sagan-Coleman approach to permit utilization of detailed information on microbial characteristics, the lethality of release and transport mechanisms, and other information about the Martian environment. Three different types of microbial release are distinguished in the model for assessing the probability of contamination. The number of viable microbes released by each mechanism depends on the bio-burden in various locations on the spacecraft and on whether the spacecraft landing is accomplished according to plan. For each of the three release mechanisms a probability of growth is computed, using a model for transport into an environment suited to microbial growth.
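
    The Sagan-Coleman structure and the extension sketched above can be summarized in a few lines (a schematic sketch with invented numbers, not the study's values): instead of a single product of expected release and growth probability, the release is split by mechanism and each term carries its own growth probability, with the small-probability sum approximating the overall contamination probability.

        # Sagan-Coleman: P(contamination) ~ expected viable microbes released x P(growth)
        def sagan_coleman(expected_release, p_growth):
            return expected_release * p_growth

        # Extension: separate release mechanisms, each with its own expected release
        # (depending on bio-burden and landing outcome) and growth probability.
        mechanisms = {
            # name: (expected viable microbes released, probability of growth given release)
            "surface erosion":    (2e2, 1e-7),
            "impact breakup":     (5e4, 1e-6),
            "venting/outgassing": (1e1, 1e-8),
        }
        p_contamination = sum(sagan_coleman(release, p_growth)
                              for release, p_growth in mechanisms.values())
        print(f"approximate probability of contamination: {p_contamination:.2e}")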

  2. Assessment of Approximate Coupled-Cluster and Algebraic-Diagrammatic-Construction Methods for Ground- and Excited-State Reaction Paths and the Conical-Intersection Seam of a Retinal-Chromophore Model.

    PubMed

    Tuna, Deniz; Lefrancois, Daniel; Wolański, Łukasz; Gozem, Samer; Schapiro, Igor; Andruniów, Tadeusz; Dreuw, Andreas; Olivucci, Massimo

    2015-12-01

    As a minimal model of the chromophore of rhodopsin proteins, the penta-2,4-dieniminium cation (PSB3) poses a challenging test system for the assessment of electronic-structure methods for the exploration of ground- and excited-state potential-energy surfaces, the topography of conical intersections, and the dimensionality (topology) of the branching space. Herein, we report on the performance of the approximate linear-response coupled-cluster method of second order (CC2) and the algebraic-diagrammatic-construction scheme of the polarization propagator of second and third orders (ADC(2) and ADC(3)). For the ADC(2) method, we considered both the strict and extended variants (ADC(2)-s and ADC(2)-x). For both CC2 and ADC methods, we also tested the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) variants. We have explored several ground- and excited-state reaction paths, a circular path centered around the S1/S0 surface crossing, and a 2D scan of the potential-energy surfaces along the branching space. We find that the CC2 and ADC methods yield a different dimensionality of the intersection space. While the ADC methods yield a linear intersection topology, we find a conical intersection topology for the CC2 method. We present computational evidence showing that the linear-response CC2 method yields a surface crossing between the reference state and the first response state featuring characteristics that are expected for a true conical intersection. Finally, we test the performance of these methods for the approximate geometry optimization of the S1/S0 minimum-energy conical intersection and compare the geometries with available data from multireference methods. The present study provides new insight into the performance of linear-response CC2 and polarization-propagator ADC methods for molecular electronic spectroscopy and applications in computational photochemistry. PMID:26642989

  4. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control System (ATC) has been studied. Definitions, input, output, and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path-stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.

  5. Negative probabilities and information gain in weak measurements

    NASA Astrophysics Data System (ADS)

    Zhu, Xuanmin; Wei, Qun; Liu, Quanhui; Wu, Shengjun

    2013-11-01

    We study the outcomes in a general measurement with postselection, and derive upper bounds for the pointer readings in weak measurement. The probabilities inferred from weak measurements change with the coupling strength, and the true probabilities can be obtained when the coupling is strong enough. By calculating the information gain of the measuring device about which path the particles pass through, we show that the “negative probabilities” only emerge in cases where the information gain is small due to very weak coupling between the measuring device and the particles. When the coupling strength increases, we can unambiguously determine whether a particle passes through a given path every time; hence the average shifts always represent true probabilities, and the strange “negative probabilities” disappear.

  6. A Path Algorithm for Constrained Estimation.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online.
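
    The exact-penalty idea in this abstract can be illustrated with a toy constrained least-squares problem: replace the constraints by absolute-value penalties and track the minimizer as the penalty constant grows. The sketch below uses a generic optimizer rather than the article's sweep-operator algorithm, and the data and constraints are hypothetical.

```python
# Hedged sketch of path following with exact (absolute-value) penalties; this is
# not the article's algorithm, just the idea of tracking the solution as the
# penalty constant rho increases.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 3)), rng.normal(size=20)

def objective(x, rho):
    fit = 0.5 * np.sum((A @ x - b) ** 2)
    penalty = np.abs(np.sum(x) - 1.0)         # equality constraint: sum(x) = 1
    penalty += np.maximum(-x, 0.0).sum()      # inequality constraints: x >= 0
    return fit + rho * penalty

x = np.zeros(3)                               # warm start on the unconstrained side
for rho in (0.1, 1.0, 10.0, 100.0):           # follow the path as rho increases
    x = minimize(objective, x, args=(rho,), method="Nelder-Mead").x
    print(rho, x.round(3))                    # constraints become active at finite rho
```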

  7. Sampling diffusive transition paths

    SciTech Connect

    F. Miller III, Thomas; Predescu, Cristian

    2006-10-12

    We address the problem of sampling double-ended diffusive paths. The ensemble of paths is expressed using a symmetric version of the Onsager-Machlup formula, which only requires evaluation of the force field and which, upon direct time discretization, gives rise to a symmetric integrator that is accurate to second order. Efficiently sampling this ensemble requires avoiding the well-known stiffness problem associated with sampling infinitesimal Brownian increments of the path, as well as a different type of stiffness associated with sampling the coarse features of long paths. The fine-features sampling stiffness is eliminated with the use of the fast sampling algorithm (FSA), and the coarse-feature sampling stiffness is avoided by introducing the sliding and sampling (S&S) algorithm. A key feature of the S&S algorithm is that it enables massively parallel computers to sample diffusive trajectories that are long in time. We use the algorithm to sample the transition path ensemble for the structural interconversion of the 38-atom Lennard-Jones cluster at low temperature.

  8. Critical paths: maximizing patient care coordination.

    PubMed

    Spath, P L

    1995-01-01

    1. With today's emphasis on horizontal and vertical integration of patient care services and the new initiatives prompted by these challenges, OR nurses are considering new methods for managing the perioperative period. One such method is the critical path. 2. A critical path defines an optimal sequencing and timing of interventions by physicians, nurses, and other staff members for a particular diagnosis or procedure, designed to better use resources, maximize quality of care, and minimize delays. 3. Hospitals implementing path-based patient care have reported cost reductions and improved team-work. Critical paths have been shown to reduce patient care costs by improving hospital efficiency, not merely by reducing physician practice variations.

  9. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  10. The Probability of Causal Conditionals

    ERIC Educational Resources Information Center

    Over, David E.; Hadjichristidis, Constantinos; Evans, Jonathan St. B. T.; Handley, Simon J.; Sloman, Steven A.

    2007-01-01

    Conditionals in natural language are central to reasoning and decision making. A theoretical proposal called the Ramsey test implies the conditional probability hypothesis: that the subjective probability of a natural language conditional, P(if p then q), is the conditional subjective probability, P(q [such that] p). We report three experiments on…

  11. Counting paths in digraphs

    SciTech Connect

    Sullivan, Blair D; Seymour, Dr. Paul Douglas

    2010-01-01

    Say a digraph is k-free if it has no directed cycles of length at most k, for k ∈ Z⁺. Thomasse conjectured that the number of induced 3-vertex directed paths in a simple 2-free digraph on n vertices is at most (n-1)n(n+1)/15. We present an unpublished result of Bondy proving there are at most 2n³/25 such paths, and prove that for the class of circular interval digraphs, an upper bound of n³/16 holds. We also study the problem of bounding the number of (non-induced) 4-vertex paths in 3-free digraphs. We show an upper bound of 4n⁴/75 using Bondy's result for Thomasse's conjecture.
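
    The quantity being bounded can be made concrete with a brute-force count of induced 3-vertex directed paths in a small digraph, as in the hedged sketch below; the example edge set is hypothetical and only shows what is being counted.

```python
# Brute-force count of induced 3-vertex directed paths a -> b -> c: the induced
# subgraph on {a, b, c} must contain exactly the two arcs (a, b) and (b, c).
from itertools import permutations

def count_induced_p3(edges, n):
    count = 0
    for a, b, c in permutations(range(n), 3):
        if (a, b) in edges and (b, c) in edges:
            forbidden = {(a, c), (c, a), (b, a), (c, b)}
            if not (forbidden & edges):
                count += 1
    return count

edges = {(0, 1), (1, 2), (2, 3), (0, 2), (3, 0)}   # hypothetical 2-free digraph
print(count_induced_p3(edges, n=4))
```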

  12. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images. PMID:9608471

  13. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA are also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.

  14. In search of a statistical probability model for petroleum-resource assessment : a critique of the probabilistic significance of certain concepts and methods used in petroleum-resource assessment : to that end, a probabilistic model is sketched

    USGS Publications Warehouse

    Grossling, Bernardo F.

    1975-01-01

    Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then

  15. Coherence-path duality relations for N paths

    NASA Astrophysics Data System (ADS)

    Hillery, Mark; Bagan, Emilio; Bergou, Janos; Cottrell, Seth

    2016-05-01

    For an interferometer with two paths, there is a relation between the information about which path the particle took and the visibility of the interference pattern at the output. The more path information we have, the smaller the visibility, and vice versa. We generalize this relation to a multi-path interferometer, and we substitute two recently defined measures of quantum coherence for the visibility, which results in two duality relations. The path information is provided by attaching a detector to each path. In the first relation, which uses an l1 measure of coherence, the path information is obtained by applying the minimum-error state discrimination procedure to the detector states. In the second, which employs an entropic measure of coherence, the path information is the mutual information between the detector states and the result of measuring them. Both approaches are quantitative versions of complementarity for N-path interferometers. Support provided by the John Templeton Foundation.

  16. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-11-08

    The probability density function for a wave propagating in a straight perfect electrical conductor (PEC) rough wall tunnel is deduced from the mathematical models of the random electromagnetic fields. By the Central Limit Theorem, the field propagating in caves or tunnels is a complex-valued Gaussian random process. The probability density function for the single modal field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which describes the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels that are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of a wave propagating in a straight PEC rough wall tunnel. This work studies only the simplest case, a PEC boundary, which is not the real world, but the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
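
    As a hedged illustration of the Ricean amplitude statistics mentioned above, the sketch below evaluates a Rice probability density with scipy; the specular amplitude and scatter spread are assumed values chosen only for the example.

```python
# Hedged sketch: evaluate a Rician probability density for a modal field amplitude.
# scipy's rice distribution uses shape b = nu / sigma with scale = sigma.
import numpy as np
from scipy.stats import rice

nu, sigma = 1.5, 0.5                       # assumed specular amplitude and scatter spread
x = np.linspace(0.0, 4.0, 9)
pdf = rice.pdf(x, nu / sigma, loc=0.0, scale=sigma)
print(np.column_stack([x, pdf]).round(4))
```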

  17. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students at the higher education level have an effect on their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level, which means that a higher level of statistics anxiety will not cause a lower score in probability topic performance. The study also revealed that motivated students benefited from the probability workshop, ensuring that their performance in the probability topic showed a positive improvement compared with before the workshop. In addition, there exists a significant difference in students' performance between genders, with better achievement among female students compared to male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  18. Improved initial guess for minimum energy path calculations.

    PubMed

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt; Jónsson, Hannes

    2014-06-01

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.
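
    The abstract's central idea, interpolating pairwise distances between the endpoint geometries and scoring intermediate images against those targets, can be sketched as below. This is not the authors' implementation; the inverse fourth-power weighting and the hypothetical 3-atom geometries are assumptions made only for illustration.

```python
# Hedged sketch of an IDPP-like objective: each intermediate image is scored by its
# weighted squared deviation from linearly interpolated target pairwise distances.
import numpy as np

def pair_dists(coords):
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def idpp_like_objective(images, start, end):
    d0, d1 = pair_dists(start), pair_dists(end)
    total = 0.0
    for k, coords in enumerate(images, start=1):
        frac = k / (len(images) + 1)
        target = (1.0 - frac) * d0 + frac * d1            # interpolated distance matrix
        d = pair_dists(coords)
        iu = np.triu_indices(len(coords), k=1)
        weight = 1.0 / np.maximum(target[iu], 1e-6) ** 4  # assumed weighting
        total += np.sum(weight * (target[iu] - d[iu]) ** 2)
    return total

# hypothetical 3-atom endpoints with 5 linearly interpolated intermediate images
start = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
end = np.array([[0.0, 0.0, 0.0], [1.5, 1.5, 0.0], [3.0, 0.0, 0.0]])
images = [(1 - t) * start + t * end for t in np.linspace(0.0, 1.0, 7)[1:-1]]
print(idpp_like_objective(images, start, end))
```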

  19. Improved initial guess for minimum energy path calculations

    SciTech Connect

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt

    2014-06-07

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.

  20. Gas path seal

    NASA Technical Reports Server (NTRS)

    Bill, R. C.; Johnson, R. D. (Inventor)

    1979-01-01

    A gas path seal suitable for use with a turbine engine or compressor is described. A shroud wearable or abradable by the abrasion of the rotor blades of the turbine or compressor shrouds the rotor blades. A compliant backing surrounds the shroud. The backing is a yieldingly deformable porous material covered with a thin ductile layer. A mounting fixture surrounds the backing.

  1. An Unplanned Path

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  2. Survival probability in patients with liver trauma.

    PubMed

    Buci, Skender; Kukeli, Agim

    2016-08-01

    Purpose - The purpose of this paper is to assess the survival probability among patients with liver trauma injury using the anatomical and psychological scores of conditions, characteristics and treatment modes. Design/methodology/approach - A logistic model is used to estimate 173 patients' survival probability. Data are taken from patient records. Only emergency room patients admitted to University Hospital of Trauma (former Military Hospital) in Tirana are included. Data are recorded anonymously, preserving the patients' privacy. Findings - When correctly predicted, the logistic models show that survival probability varies from 70.5 percent up to 95.4 percent. The degree of trauma injury, trauma with liver and other organs, total days the patient was hospitalized, and treatment method (conservative vs intervention) are statistically important in explaining survival probability. Practical implications - The study gives patients, their relatives and physicians ample and sound information they can use to predict survival chances, the best treatment and resource management. Originality/value - This study, which has not been done previously, explores survival probability, success probability for conservative and non-conservative treatment, and success probability for single vs multiple injuries from liver trauma. PMID:27477933

  4. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows rather than the forecast skill at lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while ensuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative

  5. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  6. Accurate free energy calculation along optimized paths.

    PubMed

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.

  7. Communication path for extreme environments

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C. (Inventor); Betts, Bradley J. (Inventor)

    2010-01-01

    Methods and systems for using one or more radio frequency identification devices (RFIDs), or other suitable signal transmitters and/or receivers, to provide a sensor information communication path, to provide location and/or spatial orientation information for an emergency service worker (ESW), to provide an ESW escape route, to indicate a direction from an ESW to an ES appliance, to provide updated information on a region or structure that presents an extreme environment (fire, hazardous fluid leak, underwater, nuclear, etc.) in which an ESW works, and to provide accumulated thermal load or thermal breakdown information on one or more locations in the region.

  8. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and autocorrelation function dependence on site spacing and elevation angles were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path elevation angle prediction technique was developed and demonstrated to fit well with the radar-derived single and joint terminal cumulative fade distributions at various elevation angles.

  9. Paths of Target Seeking Missiles in Two Dimensions

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E.

    1946-01-01

    Parameters that enter into the equation of the trajectory of a missile are discussed. Investigation is made of normal pursuit, of constant, proportional, and line-of-sight methods of navigation employing a target seeker, and of deriving corresponding pursuit paths. Pursuit paths obtained under similar conditions for different methods are compared. Proportional navigation is concluded to be the best method for using a target seeker installed in a missile.

  10. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
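
    The conversion step described above, turning a count of small events into a probability via a Weibull law, can be sketched as follows. The functional form and the parameter values below are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch: map the number of small events since the last large event to a
# conditional large-event probability with a two-parameter Weibull CDF.
import math

def large_event_probability(n_small, tau=300.0, beta=1.4):
    """Weibull CDF evaluated at the small-event count (illustrative parameters)."""
    return 1.0 - math.exp(-((n_small / tau) ** beta))

for n in (50, 150, 300, 600):
    print(n, round(large_event_probability(n), 3))
```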

  11. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Asscssment Program EMAP) can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  12. PROBABILITY SURVEYS , CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  13. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  14. Probability Forecasting Using Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Duncan, M.; Frisbee, J.; Wysack, J.

    2014-09-01

    Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn led to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. So constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set of possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trial. This yields a
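
    The Monte Carlo step described above can be sketched as drawing relative states at the time of close approach from the combined uncertainty and counting how often the miss distance falls below a combined hard-body radius. The sketch below is not the operational tool, and the mean miss vector, covariance, and radius are hypothetical.

```python
# Hedged sketch of a Monte Carlo collision probability estimate for one trial epoch.
import numpy as np

rng = np.random.default_rng(1)
mean_miss = np.array([120.0, 40.0, 10.0])      # hypothetical mean miss vector [m]
cov = np.diag([80.0, 60.0, 20.0]) ** 2         # hypothetical combined covariance [m^2]
hard_body_radius = 20.0                        # hypothetical combined radius [m]

samples = rng.multivariate_normal(mean_miss, cov, size=200_000)
miss_distance = np.linalg.norm(samples, axis=1)
p_collision = np.mean(miss_distance < hard_body_radius)
print(f"estimated collision probability: {p_collision:.2e}")
```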

  15. Four paths of competition

    SciTech Connect

    Studness, C.M.

    1995-05-01

    The financial community's focus on utility competition has been riveted on the proceedings now in progress at state regulatory commissions. The fear that something immediately damaging will come out of these proceedings seems to have diminished in recent months, and the stock market has reacted favorably. However, regulatory developments are only one of four paths leading to competition; the others are the marketplace, the legislatures, and the courts. Each could play a critical role in the emergence of competition.

  16. Path optimization with limited sensing ability

    SciTech Connect

    Kang, Sung Ha Kim, Seong Jun Zhou, Haomin

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.
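
    The initial-guess step described above, connecting the stationary sensor locations with a nearest-neighbor strategy before the path is shortened, can be sketched as follows; the coordinates are hypothetical and the ODE/SDE optimization stage is not reproduced.

```python
# Hedged sketch of the nearest-neighbor connection used to seed the mobile-sensor path.
import numpy as np

def nearest_neighbor_order(points, start_index=0):
    points = np.asarray(points, dtype=float)
    unvisited = list(range(len(points)))
    order = [unvisited.pop(start_index)]
    while unvisited:
        last = points[order[-1]]
        dists = [np.linalg.norm(points[i] - last) for i in unvisited]
        order.append(unvisited.pop(int(np.argmin(dists))))
    return order

sensor_locations = [(0, 0), (4, 1), (1, 3), (5, 5), (2, 6)]   # hypothetical coverage points
print(nearest_neighbor_order(sensor_locations))
```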

  17. PATHS groundwater hydrologic model

    SciTech Connect

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model. The preliminary evaluation capability prepared for WISAP is described, including the enhancements that were made because of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS; it is written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, as well as program listings and test case listings. Appendix D is a definition of terms.

  18. Dynamic Task Assignment and Path Planning of Multi-AUV System Based on an Improved Self-Organizing Map and Velocity Synthesis Method in Three-Dimensional Underwater Workspace.

    PubMed

    Zhu, Daqi; Huang, Huan; Yang, S X

    2013-04-01

    For a 3-D underwater workspace with a variable ocean current, an integrated multiple autonomous underwater vehicle (AUV) dynamic task assignment and path planning algorithm is proposed by combining the improved self-organizing map (SOM) neural network and a novel velocity synthesis approach. The goal is to control a team of AUVs to reach all appointed target locations exactly once, on the premise of workload balance and energy sufficiency, while guaranteeing the least total and individual consumption in the presence of the variable ocean current. First, the SOM neural network is developed to assign a team of AUVs to achieve multiple target locations in a 3-D ocean environment. The working process involves special definition of the initial neural weights of the SOM network, the rule to select the winner, the computation of the neighborhood function, and the method to update weights. Then, the velocity synthesis approach is applied to plan the shortest path for each AUV to visit the corresponding target in a dynamic environment subject to the ocean current being variable and targets being movable. Lastly, to demonstrate the effectiveness of the proposed approach, simulation results are given in this paper. PMID:22949070

  20. Probability Interpretation of Quantum Mechanics.

    ERIC Educational Resources Information Center

    Newton, Roger G.

    1980-01-01

    This paper draws attention to the frequency meaning of the probability concept and its implications for quantum mechanics. It emphasizes that the very meaning of probability implies the ensemble interpretation of both pure and mixed states. As a result some of the "paradoxical" aspects of quantum mechanics lose their counterintuitive character.…