Science.gov

Sample records for path probability method

  1. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in the transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non
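
    A minimal numerical sketch of the idea (not the authors' Matlab code): assuming the functional is the entropy H(p) integrated with respect to arc length, the cost of the straight-line path between two probability vectors gives an easily computed upper bound on the minimal-entropy distance. The function name and parameter values below are illustrative.

    ```python
    import numpy as np

    def entropy(p, eps=1e-12):
        """Shannon entropy H(p) of a probability vector (natural log)."""
        p = np.clip(p, eps, 1.0)
        return -np.sum(p * np.log(p))

    def straight_line_path_cost(a, b, steps=1000):
        """Integrate H(p) with respect to arc length along the straight segment
        from a to b; an upper bound on the minimal-entropy path distance."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        ts = np.linspace(0.0, 1.0, steps + 1)
        ds = np.linalg.norm(b - a) / steps          # constant arc-length element
        costs = [entropy((1 - t) * a + t * b) for t in ts]
        return np.trapz(costs, dx=ds)

    # Example: DNA base frequency profiles (A, C, G, T)
    print(straight_line_path_cost([0.25, 0.25, 0.25, 0.25], [0.4, 0.1, 0.1, 0.4]))
    ```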

  2. The application of path integral for log return probability calculation

    NASA Astrophysics Data System (ADS)

    Palupi, D. S.; Hermanto, A.; Tenderlilin, E.; Rosyid, M. F.

    2014-10-01

    Log return probability has been calculated using the path integral method. The stock price is assumed to obey the stochastic differential equation of a geometric Brownian motion, and the volatility is assumed to follow an Ornstein-Uhlenbeck process. The stochastic differential equations of stock price and volatility lead to a Fokker-Planck equation. The Fokker-Planck equation is solved using the path integral method. The resulting distribution of log returns can then be used for stock valuation based on log returns.
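
    For orientation, a hedged simulation sketch (not the paper's path-integral solution): directly Euler-discretizing the assumed geometric Brownian motion with a mean-reverting Ornstein-Uhlenbeck volatility and histogramming the resulting log returns. All names and parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_log_returns(n_paths=50_000, horizon=1.0, n_steps=252,
                             mu=0.05, kappa=2.0, sigma_bar=0.2, xi=0.3, sigma0=0.2):
        """Euler scheme for dS = mu*S*dt + sigma*S*dW1 with the volatility sigma
        following a mean-reverting Ornstein-Uhlenbeck process
        dsigma = kappa*(sigma_bar - sigma)*dt + xi*dW2 (independent noises)."""
        dt = horizon / n_steps
        log_s = np.zeros(n_paths)
        sigma = np.full(n_paths, sigma0)
        for _ in range(n_steps):
            dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
            dw2 = rng.normal(0.0, np.sqrt(dt), n_paths)
            log_s += (mu - 0.5 * sigma**2) * dt + sigma * dw1
            sigma += kappa * (sigma_bar - sigma) * dt + xi * dw2
        return log_s  # log return over the horizon

    returns = simulate_log_returns()
    hist, edges = np.histogram(returns, bins=60, density=True)  # empirical log-return density
    print(returns.mean(), returns.std())
    ```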

  3. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock price in finance.
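
    For context, the kind of object derived here in general form can be illustrated by a standard textbook case (not the paper's general result): for overdamped Langevin dynamics with drift f(x) and diffusion constant D, the path probability is commonly written in the Onsager-Machlup form (shown without the Jacobian correction term),

    $$ P[x(\cdot)] \;\propto\; \exp\!\left[-\frac{1}{4D}\int_0^T \bigl(\dot{x}(t)-f(x(t))\bigr)^{2}\,dt\right]. $$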

  4. Pattern formation, logistics, and maximum path probability

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  5. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: higher order theory based on the Bethe-Peierls and path probability method approximations.

    PubMed

    Edison, John R; Monson, Peter A

    2014-07-14

    Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.

  6. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: Higher order theory based on the Bethe-Peierls and path probability method approximations

    SciTech Connect

    Edison, John R.; Monson, Peter A.

    2014-07-14

    Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.

  7. The Path-of-Probability Algorithm for Steering and Feedback Control of Flexible Needles

    PubMed Central

    Park, Wooram; Wang, Yunfeng; Chirikjian, Gregory S.

    2010-01-01

    In this paper we develop a new framework for path planning of flexible needles with bevel tips. Based on a stochastic model of needle steering, the probability density function for the needle tip pose is approximated as a Gaussian. The means and covariances are estimated using an error propagation algorithm which has second order accuracy. Then we adapt the path-of-probability (POP) algorithm to path planning of flexible needles with bevel tips. We demonstrate how our planning algorithm can be used for feedback control of flexible needles. We also derive a closed-form solution for the port placement problem for finding good insertion locations for flexible needles in the case when there are no obstacles. Furthermore, we propose a new method using reference splines with the POP algorithm to solve the path planning problem for flexible needles in more general cases that include obstacles. PMID:21151708

  8. Looping probabilities of elastic chains: a path integral approach.

    PubMed

    Cotta-Ramusino, Ludovica; Maddocks, John H

    2010-11-01

    We consider an elastic chain at thermodynamic equilibrium with a heat bath, and derive an approximation to the probability density function, or pdf, governing the relative location and orientation of the two ends of the chain. Our motivation is to exploit continuum mechanics models for the computation of DNA looping probabilities, but here we focus on explaining the novel analytical aspects in the derivation of our approximation formula. Accordingly, and for simplicity, the current presentation is limited to the illustrative case of planar configurations. A path integral formalism is adopted, and, in the standard way, the first approximation to the looping pdf is obtained from a minimal energy configuration satisfying prescribed end conditions. Then we compute an additional factor in the pdf which encompasses the contributions of quadratic fluctuations about the minimum energy configuration along with a simultaneous evaluation of the partition function. The original aspects of our analysis are twofold. First, the quadratic Lagrangian describing the fluctuations has cross-terms that are linear in first derivatives. This, seemingly small, deviation from the structure of standard path integral examples complicates the necessary analysis significantly. Nevertheless, after a nonlinear change of variable of Riccati type, we show that the correction factor to the pdf can still be evaluated in terms of the solution to an initial value problem for the linear system of Jacobi ordinary differential equations associated with the second variation. The second novel aspect of our analysis is that we show that the Hamiltonian form of these linear Jacobi equations still provides the appropriate correction term in the inextensible, unshearable limit that is commonly adopted in polymer physics models of, e.g. DNA. Prior analyses of the inextensible case have had to introduce nonlinear and nonlocal integral constraints to express conditions on the relative displacement of the end

  9. Perturbative Methods in Path Integration

    NASA Astrophysics Data System (ADS)

    Johnson-Freyd, Theodore Paul

    This dissertation addresses a number of related questions concerning perturbative "path" integrals. Perturbative methods are one of the few successful ways physicists have worked with (or even defined) these infinite-dimensional integrals, and it is important as mathematicians to check that they are correct. Chapter 0 provides a detailed introduction. We take a classical approach to path integrals in Chapter 1. Following standard arguments, we posit a Feynman-diagrammatic description of the asymptotics of the time-evolution operator for the quantum mechanics of a charged particle moving nonrelativistically through a curved manifold under the influence of an external electromagnetic field. We check that our sum of Feynman diagrams has all desired properties: it is coordinate-independent and well-defined without ultraviolet divergences, it satisfies the correct composition law, and it satisfies Schrodinger's equation thought of as a boundary-value problem in PDE. Path integrals in quantum mechanics and elsewhere in quantum field theory are almost always of the shape ∫ f e^s for some functions f (the "observable") and s (the "action"). In Chapter 2 we step back to analyze integrals of this type more generally. Integration by parts provides algebraic relations between the values of ∫ (-) e^s for different inputs, which can be packaged into a Batalin-Vilkovisky-type chain complex. Using some simple homological perturbation theory, we study the version of this complex that arises when f and s are taken to be polynomial functions, and power series are banished. We find that in such cases, the entire scheme-theoretic critical locus (complex points included) of s plays an important role, and that one can uniformly (but noncanonically) integrate out in a purely algebraic way the contributions to the integral from all "higher modes," reducing ∫ f e^s to an integral over the critical locus. This may help explain the presence of analytic continuation in questions like the
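
    The integration-by-parts relations mentioned above are, in finite dimensions, simply the statement that total derivatives integrate to zero; a hedged one-line version (for functions with sufficient decay on R^n):

    $$ 0 \;=\; \int_{\mathbb{R}^n} \partial_i\!\bigl(f\,e^{s}\bigr)\,dx \;=\; \int_{\mathbb{R}^n} \bigl(\partial_i f + f\,\partial_i s\bigr)\,e^{s}\,dx, $$

    and it is relations of this form that get packaged into the Batalin-Vilkovisky-type chain complex.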

  10. Imprecise Probability Methods for Weapons UQ

    SciTech Connect

    Picard, Richard Roy; Vander Wiel, Scott Alan

    2016-05-13

    Building on recent work in uncertainty quantification, we examine the use of imprecise probability methods to better characterize expert knowledge and to improve on misleading aspects of Bayesian analysis with informative prior distributions. Quantitative approaches to incorporate uncertainties in weapons certification are subject to rigorous external peer review, and in this regard, certain imprecise probability methods are well established in the literature and attractive. These methods are illustrated using experimental data from LANL detonator impact testing.

  11. Continuity equation for probability as a requirement of inference over paths

    NASA Astrophysics Data System (ADS)

    González, Diego; Díaz, Daniela; Davis, Sergio

    2016-09-01

    Local conservation of probability, expressed as the continuity equation, is a central feature of non-equilibrium statistical mechanics. In the existing literature, the continuity equation is always motivated by heuristic arguments with no derivation from first principles. In this work we show that the continuity equation is a logical consequence of the laws of probability and the application of the formalism of inference over paths for dynamical systems. That is, the simple postulate that a system moves continuously through time following paths implies the continuity equation. The translation from the language of dynamical paths to the usual representation in terms of probability densities of states is performed by means of an identity derived from Bayes' theorem. The formalism presented here is valid independently of the nature of the system studied: it is applicable to physical systems and also to more abstract dynamics such as financial indicators and population dynamics in ecology, among others.
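
    The equation whose derivation is at issue is the usual local conservation law for the probability density ρ(x,t) with a probability flux velocity field v(x,t):

    $$ \frac{\partial \rho(x,t)}{\partial t} \;+\; \nabla\!\cdot\!\bigl(\rho(x,t)\,v(x,t)\bigr) \;=\; 0. $$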

  12. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  13. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  14. Latest Capabilities of POV-Ray Ricochet Flight Path Analysis & Impact Probability Prediction Software

    SciTech Connect

    Price, D.E.; Brereton, S.; Newton, M.; Moore, B.; Muirhead, D.; Pastrnak, J.; Prokosch, D.; Spence, B.; Towle, R.

    2000-09-05

    POV-Ray Ricochet Tracker is a freeware computer code developed to analyze high-speed fragment ricochet trajectory paths in complex 3-D areas such as explosives firing chambers, facility equipment rooms, or shipboard Command and Control Centers. The code analyzes as many as millions of individual fragment trajectory paths in three dimensions and tracks these trajectory paths for up to four bounces through the three-dimensional model. It allows determination of the probabilities of hitting any designated areas or objects in the model. It creates renderings of any ricochet flight paths of interest in photo-realistic renderings of the 3-D model. POV-Ray Ricochet Tracker is a customized version of the Persistence of Vision™ Ray-Tracer (POV-Ray™) version 3.02 code for the Macintosh™ Operating System (MacOS™). POV-Ray is a third generation graphics engine that creates three-dimensional, very high quality (photo-realistic) images with realistic reflections, shading, textures, perspective, and other effects using a rendering technique called ray-tracing. It reads a text file that describes the objects, lighting, and camera location in a scene and generates an image of that scene from the viewpoint of the camera. More information about POV-Ray, including the executable and source code, may be found at http://www.povray.org. The customized code (POV-Ray Shrapnel Tracker, V3.02-Custom Build 2) generates individual fragment trajectory paths at any desired angle intervals in three dimensions. The code tracks these trajectory paths through any complex three-dimensional space, and outputs detailed data for each ray as requested by the user. The output may include trajectory source location, initial direction of each trajectory, vector data for each bounce point, and any impacts with designated model target surfaces during any trajectory segment (direct path or reflected paths). This allows determination of the three-dimensional trajectory of

  15. The path exchange method for hybrid LCA.

    PubMed

    Lenzen, Manfred; Crawford, Robert

    2009-11-01

    Hybrid techniques for Life-Cycle Assessment (LCA) provide a way of combining the accuracy of process analysis and the completeness of input-output analysis. A number of methods have been suggested to implement a hybrid LCA in practice, with the main challenge being the integration of specific process data with an overarching input-output system. In this work we present a new hybrid LCA method which works at the finest input-output level of detail: structural paths. This new Path Exchange method avoids double-counting and system disturbance just as previous hybrid LCA methods, but instead of a large LCA database it requires only a minimum of external information on those structural paths that are to be represented by process data.
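
    To make "structural paths" concrete, here is a toy sketch (illustrative numbers, not the Path Exchange method itself) of the structural path decomposition of an input-output system that such a hybrid method operates on: the Leontief inverse expands as I + A + A² + ..., and each term factors into individual supply-chain paths whose contributions can be exchanged for process data.

    ```python
    import numpy as np

    # Toy 3-sector technical coefficient matrix A (column j = inputs per unit output of sector j).
    A = np.array([[0.10, 0.20, 0.05],
                  [0.05, 0.10, 0.30],
                  [0.15, 0.05, 0.10]])
    f = np.array([0.0, 0.0, 1.0])          # final demand: one unit of sector 3's output
    e = np.array([2.0, 0.5, 1.0])          # direct emission intensities per unit output

    # Total intensity via the Leontief inverse, and the contribution of one
    # structural path 3 <- 2 <- 1 (emissions in sector 1 induced via sector 2).
    total = e @ np.linalg.inv(np.eye(3) - A) @ f
    path_1_2_3 = e[0] * A[0, 1] * A[1, 2] * f[2]
    print(total, path_1_2_3)
    ```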

  16. Effect of optical turbulence along a downward slant path on probability of laser hazard

    NASA Astrophysics Data System (ADS)

    Gustafsson, K. Ove S.

    2016-10-01

    The importance of the optical turbulence effect along a downward slant path on the probability of exceeding the maximum permissible exposure (MPE) level of a laser is discussed. The optical turbulence is generated by fluctuations (variations) in the refractive index of the atmosphere. These fluctuations are in turn caused by changes in atmospheric temperature and humidity. The refractive index structure parameter, Cn^2, is the single most important parameter in the description of turbulence effects on the propagation of electromagnetic radiation. In the boundary layer, the lowest part of the atmosphere where the ground directly influences the atmosphere, Cn^2 in Sweden varies between about 10^-17 and 10^-12 m^-2/3, see Bergström et al. [5]. Along a horizontal path, Cn^2 is often assumed to be constant. The variation of Cn^2 along a slant path is described by the Tatarski model as a function of height to the power of -4/3 or -2/3, depending on day or night conditions. The hazard of laser damage to the eye is calculated for a long downward slant path. The probability of exceeding the MPE level is given as a function of distance, in comparison with the nominal ocular hazard distance (NOHD), for adopted levels of turbulence. Furthermore, calculations are carried out for a laser pointer or a designator laser from a high altitude and long distance down to a ground target. The example used shows that there is a 10% risk of exceeding the MPE at a distance 2 km beyond the NOHD (48 km in this example) due to a turbulence level of 5·10^-15 m^-2/3 at ground height. The influence of turbulence on the NOHD for a laser beam along a horizontal path has been shown before by Zilberman et al. [4].
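
    The height scaling referred to above is, in the Tatarski model, a simple power law; a sketch of the stated dependence, with h0 a reference height near the ground (the -4/3 exponent is commonly associated with daytime convective conditions and -2/3 with nighttime conditions):

    $$ C_n^{2}(h) \;=\; C_n^{2}(h_0)\,\Bigl(\tfrac{h}{h_0}\Bigr)^{-4/3} \ \text{(day)}, \qquad C_n^{2}(h) \;=\; C_n^{2}(h_0)\,\Bigl(\tfrac{h}{h_0}\Bigr)^{-2/3} \ \text{(night)}. $$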

  17. Do-It-Yourself Critical Path Method.

    ERIC Educational Resources Information Center

    Morris, Edward P., Jr.

    This report describes the critical path method (CPM), a system for planning and scheduling work to get the best time-cost combination for any particular job. With the use of diagrams, the report describes how CPM works on a step-by-step basis. CPM uses a network to show which parts of a job must be done and how they would eventually fit together…

  18. Investigating rare events with nonequilibrium work measurements. I. Nonequilibrium transition path probabilities.

    PubMed

    Moradi, Mahmoud; Sagui, Celeste; Roland, Christopher

    2014-01-21

    We have developed a formalism for investigating transition pathways and transition probabilities for rare events in biomolecular systems. In this paper, we set the theoretical framework for employing nonequilibrium work relations to estimate the relative reaction rates associated with different classes of transition pathways. Particularly, we derive an extension of Crooks' transient fluctuation theorem, which relates the relative transition rates of driven systems in the forward and reverse directions, and allows for the calculation of these relative rates using work measurements (e.g., in Steered Molecular Dynamics). The formalism presented here can be combined with Transition Path Theory to relate the equilibrium and driven transition rates. The usefulness of this framework is illustrated by means of a Gaussian model and a driven proline dimer.
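
    The relation being extended is, in its standard form, the Crooks fluctuation theorem relating forward and reverse work distributions for a process driven between two equilibrium states (quoted here for reference):

    $$ \frac{P_F(+W)}{P_R(-W)} \;=\; e^{\beta\,(W-\Delta F)}. $$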

  19. Examining the Probability of Identification for Gifted Programs for Students in Georgia Elementary Schools: A Multilevel Path Analysis Study

    ERIC Educational Resources Information Center

    McBee, Matthew

    2010-01-01

    This study focused on the analysis of a large-scale data set (N = 326,352) collected by the Georgia Department of Education using multilevel path analysis to model the probability that a student would be identified for participation in a gifted program. The model examined individual- and school-level factors that influence the probability that an…

  20. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  1. Complex analysis methods in noncommutative probability

    NASA Astrophysics Data System (ADS)

    Belinschi, Serban Teodor

    2006-02-01

    In this thesis we study convolutions that arise from noncommutative probability theory. We prove several regularity results for free convolutions, and for measures in partially defined one-parameter free convolution semigroups. We discuss connections between Boolean and free convolutions and, in the last chapter, we prove that any infinitely divisible probability measure with respect to monotonic additive or multiplicative convolution belongs to a one-parameter semigroup with respect to the corresponding convolution. Earlier versions of some of the results in this thesis have already been published, while some others have been submitted for publication. We have preserved almost entirely the specific format for PhD theses required by Indiana University. This adds several unnecessary pages to the document, but we wanted to preserve the specificity of the document as a PhD thesis at Indiana University.

  2. Application of the Conditioned Reverse Path Method

    NASA Astrophysics Data System (ADS)

    Garibaldi, L.

    2003-01-01

    The conditioned reverse path (CRP) method has been applied to identify the non-linear behaviour of a beam-like structure, both ends clamped, one with a non-linear stiffness characteristic. The same method was already successfully applied to the identification of another COST benchmark, known as the VTT non-linear suspension. This benchmark shows the enhancements of the technique, now applied to a real multi-degree-of-freedom (mdof) system, with single-point excitation subject to bending modes; the non-linearity is acting on one end of the beam in terms of displacements. The CRP technique is based on the construction of a hierarchy of uncorrelated response components in the frequency domain, allowing the estimation of the coefficients of the non-linearities away from the location of the applied excitation and also the identification of the linear dynamic compliance matrix when the number of excitations is smaller than the number of response locations.

  3. Probability Fluxes and Transition Paths in a Markovian Model Describing Complex Subunit Cooperativity in HCN2 Channels

    PubMed Central

    Benndorf, Klaus; Kusch, Jana; Schulz, Eckhard

    2012-01-01

    Hyperpolarization-activated cyclic nucleotide-modulated (HCN) channels are voltage-gated tetrameric cation channels that generate electrical rhythmicity in neurons and cardiomyocytes. Activation can be enhanced by the binding of adenosine-3′,5′-cyclic monophosphate (cAMP) to an intracellular cyclic nucleotide binding domain. Based on previously determined rate constants for a complex Markovian model describing the gating of homotetrameric HCN2 channels, we analyzed probability fluxes within this model, including unidirectional probability fluxes and the probability flux along transition paths. The time-dependent probability fluxes quantify the contributions of all 13 transitions of the model to channel activation. The binding of the first, third and fourth ligand evoked robust channel opening whereas the binding of the second ligand obstructed channel opening similar to the empty channel. Analysis of the net probability fluxes in terms of the transition path theory revealed pronounced hysteresis for channel activation and deactivation. These results provide quantitative insight into the complex interaction of the four structurally equal subunits, leading to non-equality in their function. PMID:23093920
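
    The flux quantities analyzed are the usual ones for a Markov model with occupation probabilities p_i(t) and rate constants k_ij for the transition i → j; the unidirectional and net probability fluxes are, respectively,

    $$ J_{i\to j}(t) \;=\; p_i(t)\,k_{ij}, \qquad J^{\mathrm{net}}_{ij}(t) \;=\; p_i(t)\,k_{ij} \;-\; p_j(t)\,k_{ji}. $$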

  4. A new method for estimating extreme rainfall probabilities

    SciTech Connect

    Harper, G.A.; O'Hara, T.F.; Morris, D.I.

    1994-02-01

    As part of an EPRI-funded research program, the Yankee Atomic Electric Company developed a new method for estimating probabilities of extreme rainfall. It can be used, along with other techniques, to improve the estimation of probable maximum precipitation values for specific basins or regions.

  5. Exact transition probabilities in a 6-state Landau–Zener system with path interference

    DOE PAGES

    Sinitsyn, Nikolai A.

    2015-04-23

    In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.
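
    For reference, the two-state Landau-Zener formula whose incoherent, pairwise application fails here because of path interference: with diabatic coupling g and linearly crossing diabatic energies ε1(t), ε2(t), the probability of a diabatic transition at a single crossing is

    $$ P_{\mathrm{LZ}} \;=\; \exp\!\left(-\,\frac{2\pi g^{2}}{\hbar\,\bigl|\tfrac{d}{dt}\bigl(\varepsilon_1-\varepsilon_2\bigr)\bigr|}\right). $$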

  6. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044
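
    For readers new to exact penalties, the objective being traced is of the form (a standard formulation; the notation here is ours, not the article's): for the convex program min f(x) subject to h_j(x) = 0 and g_i(x) ≤ 0,

    $$ \mathcal{E}_\rho(x) \;=\; f(x) \;+\; \rho\Bigl[\sum_{j} \lvert h_j(x)\rvert \;+\; \sum_{i} \max\bigl(0,\,g_i(x)\bigr)\Bigr], $$

    and for ρ above a finite threshold the minimizers of E_ρ coincide with the constrained minimizers; path following tracks the minimizer as ρ increases.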

  7. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.

  8. Guide waves-based multi-damage identification using a local probability-based diagnostic imaging method

    NASA Astrophysics Data System (ADS)

    Gao, Dongyue; Wu, Zhanjun; Yang, Lei; Zheng, Yuebin

    2016-04-01

    Multi-damage identification is an important and challenging task in the research of guide waves-based structural health monitoring. In this paper, a multi-damage identification method is presented using a guide waves-based local probability-based diagnostic imaging (PDI) method. The method includes a path damage judgment stage, a multi-damage judgment stage and a multi-damage imaging stage. First, damage imaging was performed by partition. The damage imaging regions are divided into beside damage signal paths. The difference in guide waves propagation characteristics between cross and beside damage paths is proposed by theoretical analysis of the guide wave signal feature. The time-of-flight difference of paths is used as a factor to distinguish between cross and beside damage paths. Then, a global PDI method (damage identification using all paths in the sensor network) is performed using the beside damage path network. If the global PDI damage zone crosses the beside damage path, it means that the discrete multi-damage model (such as a group of holes or cracks) has been misjudged as a continuum single-damage model (such as a single hole or crack) by the global PDI method. Subsequently, damage imaging regions are separated by beside damage path and local PDI (damage identification using paths in the damage imaging regions) is performed in each damage imaging region. Finally, multi-damage identification results are obtained by superimposing the local damage imaging results and the marked cross damage paths. The method is employed to inspect the multi-damage in an aluminum plate with a surface-mounted piezoelectric ceramic sensors network. The results show that the guide waves-based multi-damage identification method is capable of visualizing the presence, quantity and location of structural damage.

  9. A probability generating function method for stochastic reaction networks

    NASA Astrophysics Data System (ADS)

    Kim, Pilwon; Lee, Chang Hyeong

    2012-06-01

    In this paper we present a probability generating function (PGF) approach for analyzing stochastic reaction networks. The master equation of the network can be converted to a partial differential equation for PGF. Using power series expansion of PGF and Padé approximation, we develop numerical schemes for finding probability distributions as well as first and second moments. We show numerical accuracy of the method by simulating chemical reaction examples such as a binding-unbinding reaction, an enzyme-substrate model, Goldbeter-Koshland ultrasensitive switch model, and G2/M transition model.
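
    A minimal worked example of the master-equation-to-PDE conversion (a textbook case, not one of the paper's models): for a pure degradation reaction X → ∅ with rate constant c, the master equation dP_n/dt = c(n+1)P_{n+1} − c n P_n becomes, in terms of the generating function G(z,t) = Σ_n P_n(t) z^n,

    $$ \frac{\partial G(z,t)}{\partial t} \;=\; c\,(1-z)\,\frac{\partial G(z,t)}{\partial z}, $$

    and G can then be expanded in a power series (or Padé-approximated) to recover the probabilities and moments.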

  10. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that
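
    A hedged sketch of the basic ingredient, the symmetric Hausdorff distance between two paths represented as arrays of configurations, using SciPy's directed Hausdorff routine (this shows the metric only, not the full PSA workflow with clustering):

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff(path_p, path_q):
        """Symmetric Hausdorff distance between two paths given as
        (n_frames x n_coordinates) arrays of configurations."""
        d_pq = directed_hausdorff(path_p, path_q)[0]
        d_qp = directed_hausdorff(path_q, path_p)[0]
        return max(d_pq, d_qp)

    # Toy example: two noisy copies of the same 2-D curve
    t = np.linspace(0, np.pi, 100)
    p = np.column_stack([t, np.sin(t)])
    q = np.column_stack([t, np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)])
    print(hausdorff(p, q))
    ```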

  11. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths including turbulence-radiation interactions without the use of computer intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation unlike the stochastic method where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
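
    A toy illustration of the quantity involved (estimated here by Monte Carlo for brevity, whereas the DPF method of the paper obtains it by direct discrete convolution without stochastic sampling): the distribution of path-integrated transmittance over a path split into segments with independently distributed local extinction coefficients. All numbers are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Discrete probability function of the local extinction coefficient (1/m):
    # bin centres and their probabilities (illustrative values).
    kappa_bins = np.array([0.5, 1.0, 2.0, 4.0])
    kappa_prob = np.array([0.4, 0.3, 0.2, 0.1])

    def transmittance_dpf(n_segments=10, ds=0.01, n_samples=200_000, n_out_bins=20):
        """Distribution of path-integrated transmittance exp(-sum kappa_i * ds),
        assuming statistically independent segments of length ds."""
        kappas = rng.choice(kappa_bins, size=(n_samples, n_segments), p=kappa_prob)
        tau = np.exp(-kappas.sum(axis=1) * ds)
        hist, edges = np.histogram(tau, bins=n_out_bins, range=(0.0, 1.0))
        return hist / n_samples, edges

    dpf, edges = transmittance_dpf()
    print(dpf)
    ```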

  12. Goldstein-Kac telegraph processes with random speeds: Path probabilities, likelihoods, and reported Lévy flights

    NASA Astrophysics Data System (ADS)

    Sim, Aaron; Liepe, Juliane; Stumpf, Michael P. H.

    2015-04-01

    The Goldstein-Kac telegraph process describes the one-dimensional motion of particles with constant speed undergoing random changes in direction. Despite its resemblance to numerous real-world phenomena, the singular nature of the resultant spatial distribution of each particle precludes the possibility of any a posteriori empirical validation of this random-walk model from data. Here we show that by simply allowing for random speeds, the ballistic terms are regularized and that the diffusion component can be well-approximated via the unscented transform. The result is a computationally efficient yet robust evaluation of the full particle path probabilities and, hence, the parameter likelihoods of this generalized telegraph process. We demonstrate how a population diffusing under such a model can lead to non-Gaussian asymptotic spatial distributions, thereby mimicking the behavior of an ensemble of Lévy walkers.

  13. New method for estimating low-earth-orbit collision probabilities

    NASA Technical Reports Server (NTRS)

    Vedder, John D.; Tabor, Jill L.

    1991-01-01

    An unconventional but general method is described for estimating the probability of collision between an earth-orbiting spacecraft and orbital debris. This method uses a Monte Carlo simulation of the orbital motion of the target spacecraft and each discrete debris object to generate an empirical set of distances, each distance representing the separation between the spacecraft and the nearest debris object at random times. Using concepts from the asymptotic theory of extreme order statistics, an analytical density function is fitted to this set of minimum distances. From this function, it is possible to generate realistic collision estimates for the spacecraft.

  14. THE CRITICAL-PATH METHOD OF CONSTRUCTION CONTROL.

    ERIC Educational Resources Information Center

    DOMBROW, RODGER T.; MAUCHLY, JOHN

    THIS DISCUSSION PRESENTS A DEFINITION AND BRIEF DESCRIPTION OF THE CRITICAL-PATH METHOD AS APPLIED TO BUILDING CONSTRUCTION. INTRODUCING REMARKS CONSIDER THE MOST PERTINENT QUESTIONS PERTAINING TO CPM AND THE NEEDS ASSOCIATED WITH MINIMIZING TIME AND COST ON CONSTRUCTION PROJECTS. SPECIFIC DISCUSSION INCLUDES--(1) ADVANTAGES OF NETWORK TECHNIQUES,…

  15. Probability-theoretical analog of the vector Lyapunov function method

    SciTech Connect

    Nakonechnyi, A.N.

    1995-01-01

    The main ideas of the vector Lyapunov function (VLF) method were advanced in 1962 by Bellman and Matrosov. In this method, a Lyapunov function and a comparison equation are constructed for each subsystem. Then the dependences between the subsystems and the effect of external noise are allowed for by constructing a vector Lyapunov function (as a collection of the scalar Lyapunov functions of the subsystems) and an aggregate comparison function for the entire complex system. A probability-theoretical analog of this method for convergence analysis of stochastic approximation processes has been developed. The abstract approach proposed elsewhere eliminates all restrictions on the system phase space, the system trajectories, the class of Lyapunov functions, etc. The analysis focuses only on the conditions that relate sequences of Lyapunov function values with the derivative and ensure a particular type (mode, character) of stability. In our article, we extend this approach to the VLF method for discrete stochastic dynamic systems.

  16. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  17. CPM (Critical Path Method) as a Curriculum Tool.

    ERIC Educational Resources Information Center

    Mongerson, M. Duane

    This document discusses and illustrates the use of the Critical Path Method (CPM) as a tool for developing curriculum. In so doing a brief review of the evolution of CPM as a management tool developed by E. I. duPont de Nemours Company is presented. It is also noted that CPM is only a method of sequencing learning activities and not an end unto…

  18. Equivalent common path method in large-scale laser comparator

    NASA Astrophysics Data System (ADS)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameters monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel to the guide rail. The displacement along an arbitrary virtual optical path is calculated from the three measured displacements without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of three retroreflectors, which are precisely measured by a laser tracker. A fourth laser interferometer is used in the virtual optical path as a reference to verify this compensation method. This paper analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can improve the measurement uncertainty of the large-scale laser comparator.

  19. Probability of detection models for eddy current NDE methods

    SciTech Connect

    Rajesh, S.N.

    1993-04-30

    The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.
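
    For orientation, a minimal sketch of how a POD curve follows from a signal distribution and a detection threshold, using the common linear signal-response model with Gaussian scatter (a generic textbook model; the thesis instead predicts signal distributions with finite element analysis and fits them by the method of mixtures). All parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.stats import norm

    def pod_curve(flaw_sizes, threshold, beta0=-1.0, beta1=2.5, sigma=0.5):
        """POD(a) under a linear signal-response model y = beta0 + beta1*a + e,
        with e ~ N(0, sigma^2); a flaw is 'detected' when y exceeds the threshold."""
        mu = beta0 + beta1 * np.asarray(flaw_sizes, float)
        return norm.sf(threshold, loc=mu, scale=sigma)   # P(y > threshold)

    sizes = np.linspace(0.1, 2.0, 8)
    print(pod_curve(sizes, threshold=1.0))
    ```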

  20. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of the analyses possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
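
    A hedged sketch of Latin hypercube sampling for a two-variable response, using SciPy's LatinHypercube sampler (illustrative only; it stands in for, and is not, the LHS module integrated into NESSUS; the response function is a placeholder):

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    # Latin hypercube sample of two random variables, then a simple response density.
    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n=1000)                       # one stratified uniform point per row
    x1 = norm(loc=10.0, scale=1.0).ppf(u[:, 0])      # map strata through inverse CDFs
    x2 = norm(loc=2.0, scale=0.2).ppf(u[:, 1])
    response = x1 * x2                               # placeholder limit-state response
    print(response.mean(), response.std(), np.percentile(response, 99))
    ```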

  1. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  2. Numerical methods for high-dimensional probability density function equations

    SciTech Connect

    Cho, H.; Venturi, D.; Karniadakis, G.E.

    2016-01-15

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker–Planck and Dostupov–Pugachev equations), random wave theory (Malakhov–Saichev equations) and coarse-grained stochastic systems (Mori–Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov–Born–Green–Kirkwood–Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  3. On path-following methods for structural failure problems

    NASA Astrophysics Data System (ADS)

    Stanić, Andjelka; Brank, Boštjan; Korelc, Jože

    2016-08-01

    We revisit the consistently linearized path-following method that can be applied in the nonlinear finite element analysis of solids and structures in order to compute a solution path. Within this framework, two constraint equations are considered: a quadratic one (that includes as special cases popular spherical and cylindrical forms of constraint equation), and another one that constrains only one degree-of-freedom (DOF), the critical DOF. In both cases, the constrained DOFs may vary from one solution increment to another. The former constraint equation is successful in analysing geometrically nonlinear and/or standard inelastic problems with snap-throughs, snap-backs and bifurcation points. However, it cannot handle problems with material softening that are computed, e.g., by embedded-discontinuity finite elements. This kind of problem can be solved by using the latter constraint equation. The pluses and minuses of both presented constraint equations are discussed and illustrated on a set of numerical examples. Some of the examples also include direct computation of critical points and branch switching. The direct computation of the critical points is performed in the framework of the path-following method by using yet another constraint function, which is eigenvector-free and suited to detect critical points.
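
    As a concrete special case of the quadratic constraint equation mentioned above, the familiar spherical arc-length condition on the increments of the displacement vector Δu and load factor Δλ reads (ψ is a scaling parameter, f the reference load vector, Δl the prescribed arc-length increment; this is the standard textbook form, quoted for orientation):

    $$ g(\Delta\mathbf{u},\Delta\lambda) \;=\; \Delta\mathbf{u}^{\mathsf T}\Delta\mathbf{u} \;+\; \psi^{2}\,\Delta\lambda^{2}\,\mathbf{f}^{\mathsf T}\mathbf{f} \;-\; \Delta l^{2} \;=\; 0. $$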

  4. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  5. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  6. Parameterizing deep convection using the assumed probability density function method

    DOE PAGES

    Storer, R. L.; Griffin, B. M.; Höft, J.; ...

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  7. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  8. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.

  9. Identification of influential nodes in complex networks: Method from spreading probability viewpoint

    NASA Astrophysics Data System (ADS)

    Bao, Zhong-Kui; Ma, Chuang; Xiang, Bing-Bing; Zhang, Hai-Feng

    2017-02-01

    The problem of identifying influential nodes in complex networks has attracted much attention owing to its wide applications, including how to maximize information diffusion, boost product promotion in a viral marketing campaign, prevent a large-scale epidemic, and so on. From a spreading viewpoint, the probability of one node propagating its information to another node is closely related to the shortest distance between them, the number of shortest paths, and the transmission rate. However, it is difficult to obtain the values of transmission rates for different cases. To overcome this difficulty, we use the reciprocal of the average degree to approximate the transmission rate. A semi-local centrality index is then proposed that incorporates the shortest distance, the number of shortest paths, and the reciprocal of the average degree simultaneously. By implementing simulations in real networks as well as synthetic networks, we verify that our proposed centrality can outperform well-known centralities, such as degree centrality, betweenness centrality, closeness centrality, k-shell centrality, and nonbacktracking centrality. In particular, our findings indicate that the performance of our method is most significant when the transmission rate nears the epidemic threshold, which is the most meaningful region for the identification of influential nodes.

  10. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
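
    For orientation, the sketch below shows how a fitted logistic equation of this kind converts a winter streamflow statistic into a summer drought-flow probability. The coefficients, the log transform, and the function name are illustrative placeholders, not values or code from the report.

```python
import math

def drought_flow_probability(winter_flow_cfs, b0=-2.1, b1=-0.85):
    """Logistic model: probability that summer flow falls below a drought
    threshold, given a winter-month streamflow statistic.
    b0 and b1 are hypothetical coefficients, not values from the report."""
    x = math.log10(winter_flow_cfs)        # illustrative log transform of winter flow
    logit = b0 + b1 * x                    # linear predictor
    return 1.0 / (1.0 + math.exp(-logit))  # inverse-logit gives the probability

# A wetter winter (larger flow) yields a smaller estimated drought probability
print(drought_flow_probability(5.0))    # ~0.06
print(drought_flow_probability(50.0))   # ~0.03
```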

  11. Improved transition path sampling methods for simulation of rare events.

    PubMed

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S; de Pablo, J J

    2008-04-14

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
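
    A schematic two-way shooting move of the kind TPS builds on is sketched below, for a 1D double-well with overdamped Langevin dynamics rather than the paper's 2D rough surface. The regrowth of both path segments from the shooting point is simplified for illustration, and the biased shooting and local transition-path refinements introduced in the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def force(x):
    """Force from a toy double-well potential V(x) = x**4 - 2*x**2."""
    return -(4 * x**3 - 4 * x)

def propagate(x0, n_steps, dt=1e-3, beta=5.0):
    """Overdamped Langevin trajectory of n_steps starting from x0."""
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    noise_amp = np.sqrt(2 * dt / beta)
    for i in range(n_steps):
        xs[i + 1] = xs[i] + force(xs[i]) * dt + noise_amp * rng.normal()
    return xs

in_A = lambda x: x < -0.8   # reactant basin
in_B = lambda x: x > 0.8    # product basin

def shooting_move(path, dt=1e-3):
    """One unbiased shooting move: perturb a random slice of the current path,
    regrow both segments from it (a simplification of the real two-way shot),
    and keep the new path only if it still connects basin A to basin B."""
    i = int(rng.integers(1, len(path) - 1))
    x_shoot = path[i] + 0.05 * rng.normal()
    seg_back = propagate(x_shoot, i, dt)[::-1]
    seg_fwd = propagate(x_shoot, len(path) - 1 - i, dt)
    new_path = np.concatenate([seg_back, seg_fwd[1:]])
    if in_A(new_path[0]) and in_B(new_path[-1]):
        return new_path, True
    return path, False

# Crude straight-line seed path (real TPS starts from a dynamical reactive path)
path = np.linspace(-1.2, 1.2, 2001)
for _ in range(50):
    path, accepted = shooting_move(path)
```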

  12. Transition-Path Probability as a Test of Reaction-Coordinate Quality Reveals DNA Hairpin Folding Is a One-Dimensional Diffusive Process.

    PubMed

    Neupane, Krishna; Manuel, Ajay P; Lambert, John; Woodside, Michael T

    2015-03-19

    Chemical reactions are typically described in terms of progress along a reaction coordinate. However, the quality of reaction coordinates for describing reaction dynamics is seldom tested experimentally. We applied a framework for gauging reaction-coordinate quality based on transition-path analysis to experimental data for the first time, looking at folding trajectories of single DNA hairpin molecules measured under tension applied by optical tweezers. The conditional probability for being on a reactive transition path was compared with the probability expected for ideal diffusion over a 1D energy landscape based on the committor function. Analyzing measurements and simulations of hairpin folding where end-to-end extension is the reaction coordinate, after accounting for instrumental effects on the analysis, we found good agreement between transition-path and committor analyses for model two-state hairpins, demonstrating that folding is well-described by 1D diffusion. This work establishes transition-path analysis as a powerful new tool for testing experimental reaction-coordinate quality.

  13. Computational methods for long mean free path problems

    NASA Astrophysics Data System (ADS)

    Christlieb, Andrew Jason

    This document describes work being done on particle transport in long mean free path environments. Two non-statistical computational models are developed based on the method of propagators, which can have significant advantages in accuracy and efficiency over other methods. The first model has been developed primarily for charged particle transport and the second primarily for neutral particle transport. Both models are intended for application to transport in complex geometry using irregular meshes. The transport model for charged particles was inspired by the notion of obtaining a simulation that could handle complex geometry and resolve the bulk and sheath characteristics of a discharge, in a reasonable amount of computation time. The charged particle transport model has been applied in a self-consistent manner to the ion motion in a low density inductively coupled discharge. The electrons were assumed to have a Boltzmann density distribution for the computation of the electric field. This work assumes cylindrical geometry and focuses on charge exchange collisions as the primary ion collisional effect that takes place in the discharge. The results are compared to fluid simulations. The neutral transport model was constructed to solve the steady state Boltzmann equation on 3-D arbitrary irregular meshes. The neutral transport model was developed with the intent of investigating gas flow on the scale of micro-electro-mechanical systems (MEMS), and is meant for tracking multiple species. The advantage of these methods is that the step size is determined by the mean free path of the particles rather than the mesh employed in the simulation.

  14. Path Sampling Methods for Enzymatic Quantum Particle Transfer Reactions.

    PubMed

    Dzierlenga, M W; Varga, M J; Schwartz, S D

    2016-01-01

    The mechanisms of enzymatic reactions are studied via a host of computational techniques. While previous methods have been used successfully, many fail to incorporate the full dynamical properties of enzymatic systems. This can lead to misleading results in cases where enzyme motion plays a significant role in the reaction coordinate, which is especially relevant in particle transfer reactions where nuclear tunneling may occur. In this chapter, we outline previous methods, as well as discuss newly developed dynamical methods to interrogate mechanisms of enzymatic particle transfer reactions. These new methods allow for the calculation of free energy barriers and kinetic isotope effects (KIEs) with the incorporation of quantum effects through centroid molecular dynamics (CMD) and the full complement of enzyme dynamics through transition path sampling (TPS). Recent work, summarized in this chapter, applied the method for calculation of free energy barriers to reaction in lactate dehydrogenase (LDH) and yeast alcohol dehydrogenase (YADH). We found that tunneling plays an insignificant role in YADH but plays a more significant role in LDH, though not dominant over classical transfer. Additionally, we summarize the application of a TPS algorithm for the calculation of reaction rates in tandem with CMD to calculate the primary H/D KIE of YADH from first principles. We found that the computationally obtained KIE is within the margin of error of experimentally determined KIEs and corresponds to the KIE of particle transfer in the enzyme. These methods provide new ways to investigate enzyme mechanism with the inclusion of protein and quantum dynamics.

  15. Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel

    2011-01-01

    Archaeological sites are being compromised or destroyed at a catastrophic rate in most regions of the world. The best solution to this problem is for archaeologists to find and study these sites before they are compromised or destroyed. One way to facilitate the rapid, wide-area surveys needed to find these archaeological sites is through the generation of maps of probable archaeological sites from remotely sensed data. We describe an approach for identifying probable locations of archaeological sites over a wide area based on detecting subtle anomalies in vegetative cover through a statistically based analysis of remotely sensed data from multiple sources. We further developed this approach under a recent NASA ROSES Space Archaeology Program project. Under this project we refined and elaborated this statistical analysis to compensate for potential slight misregistrations between the remote sensing data sources and the archaeological site location data. We also explored data quantization approaches (required by the statistical analysis), and we identified a superior data quantization approach based on a unique image segmentation method. In our presentation we will summarize our refined approach and demonstrate the effectiveness of the overall approach with test data from Santa Catalina Island off the southern California coast. Finally, we discuss our future plans for further improving our approach.

  16. New Method For Classification of Avalanche Paths With Risks

    NASA Astrophysics Data System (ADS)

    Rapin, François

    After the Chamonix-Montroc avalanche event in February 1999, the French Ministry of the Environment commissioned a new examination of the "sensitive avalanche paths", i.e. sites with stakes (in particular habitat) whose behaviour cannot be apprehended in a simple way. The objective was to establish a tool, a method, for identifying these sites and ranking them according to the risk they generate, in order to distribute the efforts of public policy as well as possible. The proposed tool is based only on objective and quantifiable criteria that are, a priori, relatively fast to obtain. These criteria are gathered into four groups: the vulnerability concerned, the morphology of the site, known avalanche history, and snow climatology. Each criterion is assigned a weight, according to the group to which it belongs and relative to the other criteria. The tool thus classifies the sites subject to avalanche risk on a three-level dangerousness grid: low sensitivity (a priori the site does not warrant a specific avalanche study); doubtful sensitivity (the site may warrant a study specifying the avalanche risk); strong sensitivity (the site warrants a thorough study of the avalanche risk). Depending on the conclusions of these studies, existing prevention and risk-management measures (zoning, protection, alert, rescue) will be examined and supplemented as needed. The result obtained by applying the method in no way requires redoing a thorough study of the avalanche risk that already exists. A priori, less than one tenth of the paths will fall in the strong-sensitivity class. The present method is thus a new decision-making aid for the first phase of identification and classification of avalanche sites according to the risk they generate. To be recognized and used under good conditions, this tool was worked out by the search for

  17. The universal path integral

    NASA Astrophysics Data System (ADS)

    Lloyd, Seth; Dreyer, Olaf

    2016-02-01

    Path integrals calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness. The probabilities for events corresponding to sub-integrals can be calculated using the method of decoherent histories. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures.

  18. A new parametric method of estimating the joint probability density

    NASA Astrophysics Data System (ADS)

    Alghalith, Moawia

    2017-04-01

    We present simple parametric methods that overcome major limitations of the literature on joint/marginal density estimation. In doing so, we do not assume any form of marginal or joint distribution. Furthermore, using our method, a multivariate density can be easily estimated if we know only one of the marginal densities. We apply our methods to financial data.

  19. A K-nearest neighbors survival probability prediction method.

    PubMed

    Lowsky, D J; Ding, Y; Lee, D K K; McCulloch, C E; Ross, L F; Thistlethwaite, J R; Zenios, S A

    2013-05-30

    We introduce a nonparametric survival prediction method for right-censored data. The method generates a survival curve prediction by constructing a (weighted) Kaplan-Meier estimator using the outcomes of the K most similar training observations. Each observation has an associated set of covariates, and a metric on the covariate space is used to measure similarity between observations. We apply our method to a kidney transplantation data set to generate patient-specific distributions of graft survival and to a simulated data set in which the proportional hazards assumption is explicitly violated. We compare the performance of our method with the standard Cox model and the random survival forests method.
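
    A minimal sketch of the idea follows, assuming uniform (unweighted) neighbor contributions, Euclidean distance on the covariates, and synthetic data; the actual method allows general weights and covariate metrics.

```python
import numpy as np

def knn_survival_curve(x_new, X_train, times, events, k=10):
    """Kaplan-Meier curve built from the K training observations closest
    (in Euclidean distance) to the query covariates x_new.
    times: observed follow-up times; events: 1 = event, 0 = censored."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    idx = np.argsort(d)[:k]                       # K nearest neighbors
    t_k, e_k = times[idx], events[idx]

    order = np.argsort(t_k)                       # process times in increasing order
    t_k, e_k = t_k[order], e_k[order]

    surv, curve, n_at_risk = 1.0, [], k
    for t, e in zip(t_k, e_k):
        if e == 1:
            surv *= (n_at_risk - 1) / n_at_risk   # Kaplan-Meier multiplicative step
        n_at_risk -= 1                            # censored subjects just leave the risk set
        curve.append((t, surv))
    return curve

# Illustrative synthetic data (not the kidney transplantation data set)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
t = rng.exponential(5.0, size=200)
e = rng.integers(0, 2, size=200)
print(knn_survival_curve(X[0], X, t, e, k=20)[:3])
```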

  20. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at the money (ATM) and out of the money (OTM) options, path integral exhibits competitive performances.
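
    For context, the sketch below is a plain Monte Carlo pricer for an arithmetic-average Asian call under Black-Scholes dynamics, i.e. the kind of reference procedure such path-integral results are compared against; it is not the path-integral algorithm itself, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def asian_call_mc(S0, K, r, sigma, T, n_steps=252, n_paths=20_000):
    """Plain Monte Carlo price of an arithmetic-average Asian call under
    geometric Brownian motion (illustrative reference, not the path-integral method)."""
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # Cumulative log-returns give the discretized price paths
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)
    payoff = np.maximum(S.mean(axis=1) - K, 0.0)   # average-price call payoff
    return np.exp(-r * T) * payoff.mean()

print(asian_call_mc(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0))
```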

  1. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
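
    A discretized sketch of the variational reaction energy named above is given below, evaluated on a generic 2-D double-well gradient rather than the Müller-Brown surface; the midpoint quadrature and the straight test path are illustrative choices, not the paper's basis-function expansion.

```python
import numpy as np

def variational_reaction_energy(path, grad_V):
    """Discretized VRE (Quapp and Bofill): line integral of the gradient norm
    along a path supplied as an (N, d) array of points."""
    vre = 0.0
    for p, q in zip(path[:-1], path[1:]):
        mid = 0.5 * (p + q)                                  # midpoint quadrature
        vre += np.linalg.norm(grad_V(mid)) * np.linalg.norm(q - p)
    return vre

# Illustrative 2-D double-well surface V = x**4 - 2*x**2 + y**2 (not Muller-Brown)
grad = lambda r: np.array([4 * r[0]**3 - 4 * r[0], 2 * r[1]])
straight_path = np.linspace([-1.0, 0.0], [1.0, 0.0], 50)     # crude trial path
print(variational_reaction_energy(straight_path, grad))
```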

  2. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  3. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  4. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  5. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  6. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
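
    A minimal FORM-style sketch of the MPP search described above is given below, using a generic constrained optimizer in standard normal space and a toy linear limit state; the limit-state function and starting point are hypothetical, and the sensitivity derivations of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def form_reliability(g, n_dim, u0=None):
    """FORM sketch: find the most probable point (MPP) on the limit state
    g(u) = 0 in standard normal space by minimizing ||u||^2, then approximate
    the failure probability as Phi(-beta), with beta the reliability index."""
    u0 = np.full(n_dim, 0.1) if u0 is None else u0
    res = minimize(lambda u: u @ u, u0,
                   constraints={"type": "eq", "fun": g})   # SLSQP handles the equality
    beta = np.linalg.norm(res.x)
    return res.x, beta, norm.cdf(-beta)

# Hypothetical limit state: failure when u1 + u2 > 3, boundary g(u) = 3 - u1 - u2 = 0
mpp, beta, pf = form_reliability(lambda u: 3.0 - u[0] - u[1], n_dim=2)
print(mpp, beta, pf)   # beta ~ 3/sqrt(2) ~ 2.12, pf ~ 1.7e-2
```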

  7. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).

  8. An improved path flux analysis with multi generations method for mechanism reduction

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Gou, Xiaolong

    2016-03-01

    An improved path flux analysis with a multi generations (IMPFA) method is proposed to eliminate unimportant species and reactions, and to generate skeletal mechanisms. The production and consumption path fluxes of each species at multiple reaction paths are calculated and analysed to identify the importance of the species and of the elementary reactions. On the basis of the indexes of each reaction path of the first, second, and third generations, the improved path flux analysis with two generations (IMPFA2) and improved path flux analysis with three generations (IMPFA3) are used to generate skeletal mechanisms that contain different numbers of species. The skeletal mechanisms are validated in the case of homogeneous autoignition and perfectly stirred reactor of methane and n-decane/air mixtures. Simulation results of the skeletal mechanisms generated by IMPFA2 and IMPFA3 are compared with those obtained by path flux analysis (PFA) with two and three generations, respectively. The comparisons of ignition delay times, final temperatures, and temperature dependence on flow residence time show that the skeletal mechanisms generated by the present IMPFA method are more accurate than those obtained by the PFA method, with almost the same number of species under a range of initial conditions. By considering the accuracy and computational efficiency, when using the IMPFA (or PFA) method, three generations may be the best choice for the reduction of large-scale detailed chemistry.

  9. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk for different initial positions.
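
    To illustrate the recurrence approach, the sketch below solves a standard absorption-probability problem for a simple walk on a finite path; the note's own setup with cards of random numbers differs, so this is only an analogous example.

```python
def absorption_probability(n, start, p=0.5):
    """Probability that a simple random walk on {0, ..., n}, started at `start`,
    reaches n before 0. Closed form obtained from the recurrence
    u(i) = p*u(i+1) + (1-p)*u(i-1) with u(0) = 0 and u(n) = 1.
    (Illustrative textbook example, not the note's card-based walk.)"""
    if p == 0.5:
        return start / n          # symmetric walk: linear in the starting position
    r = (1 - p) / p
    return (1 - r**start) / (1 - r**n)

print(absorption_probability(10, 3))        # 0.3 for the symmetric walk
print(absorption_probability(10, 3, p=0.6)) # biased walk reaches n more often
```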

  10. Transition Path Sampling Method and Its Application in Argon Phase Transition

    NASA Astrophysics Data System (ADS)

    Li, Bingxi

    Rare events during both physical and chemical transitions are of great significance for understanding the evolution of systems from one stable state to another. The solid-solid phase transition is a fundamental problem in this field, and much experimental and theoretical effort has gone toward tackling it. However, Molecular Dynamics simulations in this field encounter the problem that these transitions occur too rarely to be observed within current simulations. The Transition Path Sampling (TPS) method is designed to tackle this issue. The phase transition between the face-centered cubic (fcc) and hexagonal close-packed (hcp) phases in solid argon at 40 K is investigated with the TPS method. TPS is a rare-event sampling methodology that combines Molecular Dynamics and Monte Carlo. Molecular Dynamics is used to generate the whole trajectory from an assigned starting point, based on the time evolution of the system. Monte Carlo is applied to select a structure from the known phase transition trajectories as the starting point. This is an importance sampling process, and the acceptance probability for starting-point selection depends on its equilibrium probability in the ensemble of interest. With the TPS method, the sampling of trajectories can be performed efficiently within the ensemble of phase transition trajectories, and the sampling process yields energetically favorable trajectories. A phase transition trajectory is required to initialize the Molecular Dynamics Transition Path Sampling process. This trajectory can be generated with the Variable Cell Nudged Elastic Band (VCNEB) method, which determines the Ar fcc-to-hcp transition at 0 K. The transition-state configuration in the VCNEB trajectory is selected as the initial state to start the TPS calculation. An atomistic description of the mechanism of the fcc-to-hcp transformation in solid argon is then obtained from Molecular Dynamics transition path sampling simulations. We show that the transition barrier at 40 K under ambient

  11. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is then applied to reformulate this problem as an optimal control problem. The whole transformation process is derived in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method, and the improved method can solve the dead-point problem effectively.
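
    The sketch below shows the classical APF update that the paper takes as its starting point (attractive pull toward the goal plus short-range repulsion from obstacles); the gains, distances, and obstacle layout are illustrative, and the optimal-control reformulation that removes the dead-point problem is not shown.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One gradient-descent step on the standard attractive/repulsive potential
    (classical APF baseline, not the paper's optimal-control update)."""
    force = k_att * (goal - pos)                       # attraction toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if d < d0:                                     # repulsion active only within d0
            # Gradient of U_rep = 0.5*k_rep*(1/d - 1/d0)**2
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 4.5])]                     # slightly off the direct line
path = [pos]
for _ in range(400):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(pos - goal) < 0.1:
        break
```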

  12. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), a method for calculating the optical path difference of a shaped Cassegrain antenna is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted with Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. This work supports real-time measurement and adjustment of the TM structure. The approach is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.

  13. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, uses 1 in 200 years, the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharges and the probability is 1 in 1250 years (in the fresh-water section). In contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "why does the method vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2, but it was replaced by the probability method after the war because of its limitations under the specific socio-economic situation: (1) budget limitations due to the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that struck Japan, broke the records of historical maximum discharge in the main rivers, and made the flood prevention projects difficult to complete. Then, Japanese hydrologists imported hydrological probability statistics from the West to take account of

  14. Probability estimation with machine learning methods for dichotomous and multicategory outcome: theory.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Biau, Gérard; Kohler, Michael; König, Inke R; Malley, James D; Ziegler, Andreas

    2014-07-01

    Probability estimation for binary and multicategory outcomes using logistic and multinomial logistic regression has a long-standing tradition in biostatistics. However, biases may occur if the model is misspecified. In contrast, outcome probabilities for individuals can be estimated consistently with machine learning approaches, including k-nearest neighbors (k-NN), bagged nearest neighbors (b-NN), random forests (RF), and support vector machines (SVM). Because machine learning methods are rarely used by applied biostatisticians, the primary goal of this paper is to explain the concept of probability estimation with these methods and to summarize recent theoretical findings. Probability estimation in k-NN, b-NN, and RF can be embedded into the class of nonparametric regression learning machines; therefore, we start with the construction of nonparametric regression estimates and review results on consistency and rates of convergence. In SVMs, outcome probabilities for individuals are estimated consistently by repeatedly solving classification problems. For SVMs, we review the classification problem and then dichotomous probability estimation. Next we extend the algorithms for estimating probabilities using k-NN, b-NN, and RF to multicategory outcomes and discuss approaches for the multicategory probability estimation problem using SVM. In simulation studies for dichotomous and multicategory dependent variables we demonstrate the general validity of the machine learning methods and compare them with logistic regression. However, each method fails in at least one simulation scenario. We conclude with a discussion of the failures and give recommendations for selecting and tuning the methods. Applications to real data and example code are provided in a companion article (doi:10.1002/bimj.201300077).
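
    A short sketch of probability estimation with off-the-shelf learners follows, using scikit-learn as a stand-in implementation (the paper itself is implementation-agnostic); the simulated data and hyperparameters are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Illustrative simulated dichotomous outcome with five covariates
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "k-NN":     KNeighborsClassifier(n_neighbors=25),
    "forest":   RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM":      SVC(probability=True, random_state=0),  # probabilities via internal calibration
}

x_new = X[:1]
for name, model in models.items():
    model.fit(X, y)
    print(name, model.predict_proba(x_new)[0, 1])       # estimated P(Y = 1 | x_new)
```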

  15. Constructing inverse probability weights for continuous exposures: a comparison of methods.

    PubMed

    Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S

    2014-03-01

    Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
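
    A minimal sketch of the simplest approach compared above is given below: stabilized weights from a homoscedastic normal exposure model fitted by ordinary least squares; the variable names and simulated data are illustrative only.

```python
import numpy as np
from scipy import stats

def stabilized_weights(exposure, covariates):
    """Stabilized inverse probability weights for a continuous exposure,
    assuming a normal exposure model with constant (homoscedastic) variance."""
    X = np.column_stack([np.ones(len(exposure)), covariates])
    beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)      # E[A | L] by OLS
    resid = exposure - X @ beta
    sd_cond = resid.std(ddof=X.shape[1])

    num = stats.norm.pdf(exposure, exposure.mean(), exposure.std(ddof=1))  # marginal density
    den = stats.norm.pdf(exposure, X @ beta, sd_cond)                      # conditional density
    return num / den

# Illustrative simulated homoscedastic exposure A given covariates L
rng = np.random.default_rng(1)
L = rng.normal(size=(1000, 2))
A = 0.5 * L[:, 0] - 0.3 * L[:, 1] + rng.normal(size=1000)
w = stabilized_weights(A, L)
print(w.mean())   # stabilized weights should average close to 1
```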

  16. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems.

    PubMed

    Branduardi, Davide; Faraldo-Gómez, José D

    2013-09-10

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.

  17. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems

    PubMed Central

    Branduardi, Davide; Faraldo-Gómez, José D.

    2014-01-01

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.

  18. Nonlinear identification of base-isolated buildings by reverse path method

    NASA Astrophysics Data System (ADS)

    Xie, Liyu; Mita, Akira

    2009-03-01

    The performance of reverse path methods applied to identify the underlying linear model of base-isolated structures is investigated. The nonlinear rubber bearings are considered as nonlinear components attached to an underlying linear model. The advantage of reverse path formulation is that it can separate the linearity and nonlinearity of the structure, extract the nonlinearity and identify the underlying linear structure. The difficulty lies in selecting the nonlinearity function of the hysteretic force due to its multi-valued property and path-dependence. In the thesis, the hysteretic force is approximated by the polynomial series of displacement and velocity. The reverse path formulation is solved by Nonlinear Identification through Feedback of Output (NIFO) methods using least-square solution. Numerical simulation is carried out to investigate the identification performance.

  19. Method for path imbalance measurement of the two-arm fiber-optic interferometer.

    PubMed

    Huang, Shih-Chu; Lin, Hermann

    2008-10-01

    The path imbalance (PI) of a two-arm fiber-optic interferometric sensor is a key parameter; a precision of millimeters is required. Currently, precision reflectometry and millimeter optical time-domain reflectometry are used to measure this tiny optical path difference, but the performance of these measurements is limited by the length and resolution of the PI. We propose a new interferometric method to accurately measure the PI, from millimeters up to a few decimeters.

  20. Performance of methods for estimating the effect of covariates on group membership probabilities in group-based trajectory models.

    PubMed

    Davies, Christopher E; Giles, Lynne C; Glonek, Gary Fv

    2017-01-01

    One purpose of a longitudinal study is to gain insight into how characteristics at earlier points in time can impact subsequent outcomes. Typically, the outcome variable varies over time and the data for each individual can be used to form a discrete path of measurements, that is, a trajectory. Group-based trajectory modelling methods seek to identify subgroups of individuals within a population with trajectories that are more similar to each other than to trajectories in distinct groups. An approach to modelling the influence of covariates measured at earlier time points in the group-based setting is to consider models wherein these covariates affect the group membership probabilities. Models in which prior covariates impact the trajectories directly are also possible but are not considered here. In the present study, we compared six different methods for estimating the effect of covariates on the group membership probabilities, which differ in how they account for the uncertainty in the group membership assignment. We found that when investigating the effect of one or several covariates on a group-based trajectory model, the full likelihood approach minimized the bias in the estimate of the covariate effect. In this '1-step' approach, the estimation of the effect of covariates and the trajectory model are carried out simultaneously. Of the '3-step' approaches, where the effect of the covariates is assessed subsequent to the estimation of the group-based trajectory model, only Vermunt's improved 3-step approach resulted in bias estimates similar in size to those of the full likelihood approach. The remaining methods considered resulted in considerably higher bias in the covariate effect estimates and should not be used. In addition to the bias empirically demonstrated for the probability regression approach, we have shown analytically that it is biased in general.

  1. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.

  2. Selective flow path alpha particle detector and method of use

    DOEpatents

    Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore

    2002-01-01

    A method and apparatus for monitoring alpha contamination are provided in which ions generated in the air surrounding the item, by the passage of alpha particles, are moved to a distant detector location. The parts of the item from which ions are withdrawn can be controlled by restricting the air flow over different portions of the apparatus. In this way, detection of internal and external surfaces separately, for instance, can be provided. The apparatus and method are particularly suited for use in undertaking alpha contamination measurements during the commissioning operations.

  3. A method of classification for multisource data in remote sensing based on interval-valued probabilities

    NASA Technical Reports Server (NTRS)

    Kim, Hakil; Swain, Philip H.

    1990-01-01

    An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. Then the method is applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on the global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.

  4. Path Planning Method for UUV Homing and Docking in Movement Disorders Environment

    PubMed Central

    Yan, Zheping; Deng, Chao; Chi, Dongnan; Hou, Shuping

    2014-01-01

    Path planning method for unmanned underwater vehicles (UUV) homing and docking in movement disorders environment is proposed in this paper. Firstly, cost function is proposed for path planning. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with minimum value of cost function. Then, a strategy for UUV enters into the mother vessel with a fixed angle being proposed. Finally, the test function is introduced to analyze the performance of NPSO and compare with basic particle swarm optimization (BPSO), inertia weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficient (TVAC). It has turned out that, for unimodal functions, NPSO performed better searching accuracy and stability than other algorithms, and, for multimodal functions, the performance of NPSO is similar to TVAC. Then, the simulation of UUV path planning is presented, and it showed that, with the strategy proposed in this paper, UUV can dodge obstacles and threats, and search for the efficiency path. PMID:25054169
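
    The NPSO variant itself is not specified in the record; for orientation, a minimal sketch of basic particle swarm optimization (the BPSO baseline it is compared against) minimizing a generic cost function is given below. The inertia and acceleration coefficients are conventional choices assumed here, not the paper's settings.

        import numpy as np

        def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-10.0, 10.0), seed=0):
            """Minimal (basic) particle swarm optimization sketch."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))      # positions
            v = np.zeros_like(x)                             # velocities
            pbest = x.copy()
            pbest_val = np.array([cost(p) for p in x])
            g = pbest[np.argmin(pbest_val)].copy()           # global best position
            w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([cost(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                g = pbest[np.argmin(pbest_val)].copy()
            return g, pbest_val.min()

        # Example: the sphere test function, a standard unimodal benchmark.
        best_x, best_f = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)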

  5. Method and apparatus for monitoring characteristics of a flow path having solid components flowing therethrough

    DOEpatents

    Hoskinson, Reed L.; Svoboda, John M.; Bauer, William F.; Elias, Gracy

    2008-05-06

    A method and apparatus are provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention, the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward a flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.

  6. Evaluation of the Fokker-Planck probability by Asymptotic Taylor Expansion Method

    NASA Astrophysics Data System (ADS)

    Firat, Kenan; Ozer, Okan

    2017-02-01

    The one-dimensional Fokker-Planck equation is solved by the Asymptotic Taylor Expansion Method for the time-dependent probability density of a particle. Using an ansatz wave function, one obtains a series expansion of the solution of the associated Schrödinger equation, which allows one to find the eigenfunctions and eigenenergies of the states needed for the evaluation of the probability. The eigenenergies of certain bistable potentials are calculated for selected potential parameters. The probability function is determined and graphed for these potential parameters. The numerical results are compared with the existing literature, and a conclusion about the advantages and disadvantages of the method is given.
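
    The record refers to, but does not write out, the mapping between the Fokker-Planck equation and a Schrödinger-type eigenvalue problem. A standard form of that mapping (stated here as background, not necessarily in the paper's notation) for a drift potential U(x) and diffusion constant D is:

        \[
        \frac{\partial P}{\partial t}
          = \frac{\partial}{\partial x}\bigl[U'(x)\,P\bigr] + D\,\frac{\partial^2 P}{\partial x^2},
        \qquad
        P(x,t) = e^{-U(x)/2D}\,\psi(x)\,e^{-\lambda t},
        \]
        \[
        -\,D\,\psi''(x) + \left[\frac{U'(x)^2}{4D} - \frac{U''(x)}{2}\right]\psi(x) = \lambda\,\psi(x),
        \]

    so each eigenpair (λ, ψ) of the Schrödinger-like operator contributes a decaying mode e^{-λt} to the time-dependent probability density.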

  7. Estimating the Probability of Asteroid Collision with the Earth by the Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Chernitsov, A. M.; Tamarov, V. A.; Barannikov, E. A.

    2016-09-01

    The commonly accepted method of estimating the probability of asteroid collision with the Earth is investigated using the example of two fictitious asteroids, one of which must obviously collide with the Earth while the second must pass by at a dangerous distance from the Earth. The simplest Kepler model of motion is used. Confidence regions of asteroid motion are estimated by the Monte Carlo method. Two variants of constructing the confidence region are considered: in the form of points distributed over the entire volume and in the form of points mapped onto the boundary surface. A special feature of the multidimensional point distribution in the first variant, which can lead to an estimated collision probability of zero for bodies that do in fact collide with the Earth, is demonstrated. The probability estimates obtained with an even considerably smaller number of points in the confidence region determined by its boundary surface are free from this disadvantage.
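
    A minimal sketch of the brute-force Monte Carlo estimate discussed above: orbital states are sampled from the fitted normal distribution, each sample is propagated, and the fraction that passes closer than one Earth radius is counted. The propagator propagate_min_distance is an assumed user-supplied helper (e.g. a two-body Kepler propagator), not a library routine.

        import numpy as np

        def collision_probability(mean_state, cov, propagate_min_distance,
                                  r_earth=6378.136e3, n_samples=100_000, seed=0):
            """Monte Carlo estimate of the collision probability.

            mean_state, cov        : nominal state and covariance from the orbit fit
                                     (assumed multivariate normal).
            propagate_min_distance : state -> minimum Earth-asteroid distance [m]
                                     over the time span of interest (user-supplied).
            """
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(mean_state, cov, size=n_samples)
            hits = sum(propagate_min_distance(s) < r_earth for s in samples)
            p = hits / n_samples
            err = np.sqrt(p * (1.0 - p) / n_samples)   # binomial standard error
            return p, err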

  8. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    PubMed

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation, whereas otherwise the CPD-computed probabilities will decrease. We therefore advise not to use the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, that is, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimate still overlap the true death time interval.

  9. A variational approach to path planning in three dimensions using level set methods

    NASA Astrophysics Data System (ADS)

    Cecil, Thomas; Marthaler, Daniel E.

    2006-01-01

    In this paper we extend the two-dimensional methods set forth in [T. Cecil, D. Marthaler, A variational approach to search and path planning using level set methods, UCLA CAM Report, 04-61, 2004], proposing a variational approach to a path planning problem in three dimensions using a level set framework. After defining an energy integral over the path, we use gradient flow on the defined energy and evolve the entire path until a locally optimal steady state is reached. We follow the framework for motion of curves in three dimensions set forth in [P. Burchard, L.-T. Cheng, B. Merriman, S. Osher, Motion of curves in three spatial dimensions using a level set approach, J. Comput. Phys. 170(2) (2001) 720-741], modified appropriately to take into account that we allow for paths with positive, varying widths. Applications of this method extend to robotic motion and visibility problems, for example. Numerical methods and algorithms are given, and examples are presented.

  10. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.
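
    A minimal sketch of the LASSO route recommended above, using L1-penalized logistic regression to produce a sparse NTCP-style model; the features, labels, and penalty strength below are synthetic placeholders, not the study's xerostomia data or settings.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))          # candidate dosimetric/clinical predictors (synthetic)
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=200) > 0).astype(int)

        # L1 penalty shrinks most coefficients to exactly zero, giving a sparse,
        # interpretable complication-probability model.
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        model.fit(X, y)
        print("selected predictors:", np.flatnonzero(model.coef_[0]))
        print("complication probability, first patient:", model.predict_proba(X[:1])[0, 1])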

  11. The "Closed School-Cluster" Method of Selecting a Probability Sample.

    ERIC Educational Resources Information Center

    Shaycoft, Marion F.

    In some educational research studies--particularly longitudinal studies requiring a probability sample of schools and spanning a wide range of grades--it is desirable to so select the sample that schools at different levels (e.g., elementary and secondary) "correspond." This has often proved unachievable, using standard methods of selecting school…

  12. A fast tomographic method for searching the minimum free energy path

    SciTech Connect

    Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

    2014-10-21

    The Minimum Free Energy Path (MFEP) provides important information about a chemical reaction, such as the free energy barrier, the location of the transition state, and the relative stability of reactant and product. With the MFEP, one can study the mechanism of the reaction in an efficient way. Due to the large number of degrees of freedom, searching for the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is then optimized iteratively in the collective variable space. Applications of the present method to model systems show that our method is practical and can be an alternative approach for finding the state-to-state MFEP.

  13. Investigation of methods for calibration of classifier scores to probability of disease

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Sahiner, Berkman; Samuelson, Frank; Pezeshk, Aria; Petrick, Nicholas

    2015-03-01

    Classifier scores in many diagnostic devices, such as computer-aided diagnosis systems, are usually on an arbitrary scale, the meaning of which is unclear. Calibration of classifier scores to a meaningful scale such as the probability of disease is potentially useful when such scores are used by a physician or another algorithm. In this work, we investigated the properties of two methods for calibrating classifier scores to the probability of disease. The first is a semiparametric method in which the likelihood ratio for each score is estimated based on a semiparametric proper receiver operating characteristic model, and an estimate of the probability of disease is then obtained using Bayes' theorem assuming a known prevalence of disease. The second method is nonparametric, in which isotonic regression via the pool-adjacent-violators algorithm is used. We employed the mean square error (MSE) and the Brier score to evaluate the two methods under two paradigms: (a) the dataset used to construct the score-to-probability mapping function is also used to calculate the performance metric (MSE or Brier score) (resubstitution); (b) an independent test dataset is used to calculate the performance metric (independent). Under our simulation conditions, the semiparametric method is found to be superior to the nonparametric method at small to medium sample sizes, and the two methods appear to converge at large sample sizes. Our simulation results also indicate that the resubstitution bias may depend on the performance metric and that, for the semiparametric method, the resubstitution bias is small when a reasonable number of cases (> 100 cases per class) are available.
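
    A minimal sketch of the nonparametric route described above: isotonic regression (pool-adjacent-violators) fits a monotone map from raw classifier scores to probability-of-disease estimates. The scores and labels are synthetic stand-ins for illustration.

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(0)
        scores_healthy = rng.normal(0.0, 1.0, 500)        # scores for non-diseased cases
        scores_diseased = rng.normal(1.0, 1.0, 500)       # scores for diseased cases
        scores = np.concatenate([scores_healthy, scores_diseased])
        labels = np.concatenate([np.zeros(500), np.ones(500)])   # 1 = disease present

        calibrator = IsotonicRegression(out_of_bounds="clip")
        calibrator.fit(scores, labels)                    # monotone score-to-probability map
        new_scores = np.array([-1.0, 0.5, 2.0])
        print(calibrator.predict(new_scores))             # calibrated probabilities of disease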

  14. A method to deconvolve stellar rotational velocities II. The probability distribution function via Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Christen, Alejandra; Escarate, Pedro; Curé, Michel; Rial, Diego F.; Cassetti, Julia

    2016-10-01

    Aims: Knowing the distribution of stellar rotational velocities is essential for understanding stellar evolution. Because we measure the projected rotational speed v sin i, we need to solve an ill-posed problem given by a Fredholm integral of the first kind to recover the "true" rotational velocity distribution. Methods: After discretization of the Fredholm integral we apply the Tikhonov regularization method to obtain directly the probability distribution function for stellar rotational velocities. We propose a simple and straightforward procedure to determine the Tikhonov parameter. We applied Monte Carlo simulations to prove that the Tikhonov method is a consistent estimator and asymptotically unbiased. Results: This method is applied to a sample of cluster stars. We obtain confidence intervals using a bootstrap method. Our results are in close agreement with those obtained using the Lucy method for recovering the probability density distribution of rotational velocities. Furthermore, Lucy estimation lies inside our confidence interval. Conclusions: Tikhonov regularization is a highly robust method that deconvolves the rotational velocity probability density function from a sample of v sin i data directly without the need for any convergence criteria.
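
    A minimal sketch of the core numerical step described above: once the Fredholm integral is discretized to a linear system A f = b, the Tikhonov-regularized estimate solves the damped normal equations. The kernel matrix and data below are toy placeholders, not the v sin i projection kernel or the cluster-star sample, and the regularization parameter is fixed rather than chosen by the paper's procedure.

        import numpy as np

        def tikhonov_solve(A, b, lam):
            """Solve min ||A f - b||^2 + lam ||f||^2 via the normal equations."""
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        A = np.tril(np.ones((50, 50))) / 50.0                     # toy smoothing kernel
        f_true = np.exp(-0.5 * ((np.arange(50) - 25) / 6.0) ** 2) # "true" distribution
        b = A @ f_true + np.random.default_rng(0).normal(scale=1e-3, size=50)
        f_est = tikhonov_solve(A, b, lam=1e-3)                    # regularized estimate of f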

  15. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and it is easy to implement. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and repeated paths when obstacles are larger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, so the computational load is reduced. A virtual target is introduced in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently. PMID:28255297

  16. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and it is easy to implement. However, there are some shortcomings when BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and repeated paths when obstacles are larger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, so the computational load is reduced. A virtual target is introduced in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently.

  17. Most probable numbers of organisms: revised tables for the multiple tube method.

    PubMed

    Tillett, H E

    1987-10-01

    Estimation of numbers of organisms is often made using dilution series, for example when examining water samples for coliform organisms. In this paper the most probable numbers (MPNs) are calculated for a 15-tube series consisting of five replicates at three consecutive tenfold dilutions. Exact conditional probabilities are computed to replace previous approximations. When growth is observed in several of the tubes it is not realistic to select a single MPN. Instead a most probable range (MPR) should be reported. But using an MPR creates problems when comparison has to be made with a legislated, single-valued Standard. It is suggested that the wording of the Standards should be expressed differently when the multiple tube method is used.
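
    The tables in the record are built from the likelihood of the observed pattern of positive tubes. A minimal sketch of that calculation, assuming Poisson-distributed organisms and a simple grid search (rather than the exact conditional probabilities computed in the paper), is:

        import numpy as np

        def most_probable_number(positives, tubes, volumes_ml):
            """Hedged sketch of a most-probable-number (MPN) estimate.

            positives  : number of positive tubes at each dilution
            tubes      : number of tubes inoculated at each dilution (e.g. 5, 5, 5)
            volumes_ml : sample volume per tube at each dilution (e.g. 10, 1, 0.1)

            A tube receiving volume v from a sample of concentration c is positive
            with probability 1 - exp(-c v); the MPN is the c maximizing the
            binomial likelihood of the observed pattern.
            """
            pos, n, v = map(np.asarray, (positives, tubes, volumes_ml))

            def log_lik(c):
                p = np.clip(1.0 - np.exp(-c * v), 1e-12, 1.0 - 1e-12)
                return np.sum(pos * np.log(p) + (n - pos) * np.log(1.0 - p))

            grid = np.logspace(-3, 3, 20001)          # candidate concentrations per ml
            return grid[np.argmax([log_lik(c) for c in grid])]

        # Example: 15-tube series (5 tubes at each of three tenfold dilutions), result 5/5, 3/5, 1/5.
        print(most_probable_number([5, 3, 1], [5, 5, 5], [10.0, 1.0, 0.1]))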

  18. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.

  19. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  20. A parallel multiple path tracing method based on OptiX for infrared image generation

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu

    2015-12-01

    Infrared image generation technology is being widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX was proposed to solve this problem, which can not only increase computational efficiency compared with serial ray tracing on a CPU, but also produce relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed, and a multiple path tracing method based on OptiX was developed accordingly. Furthermore, Monte Carlo integration was employed to solve the radiative transfer equation, with importance sampling applied to accelerate the convergence of the integral. After that, the framework of the simulation platform and its sensor-effects simulation diagram were given. Finally, the results showed that the method could generate relatively accurate radiation images provided that a precise importance sampling method was available.
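
    A minimal sketch of Monte Carlo integration with importance sampling, the estimator named above: the integral of f is approximated by the mean of f(x)/q(x) for samples x drawn from a proposal density q chosen to resemble f. The one-dimensional integrand and exponential proposal below are illustrative stand-ins for the radiative transfer terms.

        import numpy as np

        rng = np.random.default_rng(0)

        f = lambda x: np.exp(-x) * np.cos(x) ** 2       # toy integrand on [0, inf)
        q_sample = lambda n: rng.exponential(1.0, n)     # proposal density: Exp(1)
        q_pdf = lambda x: np.exp(-x)

        x = q_sample(100_000)
        weights = f(x) / q_pdf(x)                        # importance weights
        estimate = weights.mean()
        std_err = weights.std(ddof=1) / np.sqrt(x.size)
        print(estimate, "+/-", std_err)

    Concentrating the proposal density where the integrand is large is what accelerates convergence compared with uniform sampling.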

  1. Enumeration of fungi in fruits by the most probable number method.

    PubMed

    Watanabe, Maiko; Tsutsumi, Fumiyuki; Lee, Ken-ichi; Sugita-Konishi, Yoshiko; Kumagai, Susumu; Takatori, Kosuke; Hara-Kudo, Yukiko; Konuma, Hirotaka

    2010-01-01

    In this study, enumeration methods for fungi in foods were evaluated using fruits, which are often contaminated by fungi in the field and rot because of fungal contaminants. As the test methods, we used the standard most probable number (MPN) method with liquid medium in test tubes, which is traditionally used as an enumeration method for bacteria, and the plate-MPN method with agar plate media, in addition to the surface plating method as the traditional enumeration method for fungi. We tested 27 samples of 9 commercial domestic fruits using their surface skin. The results indicated that the standard MPN method showed slow recovery of fungi in the test tubes and lower counts than the surface plating method and the plate-MPN method in almost all samples. The fungal count on the 4th day of incubation was approximately the same as on the 10th day for both the surface plating method and the plate-MPN method, with no significant differences between the fungal counts obtained by these two methods. This result indicated that the plate-MPN method agrees numerically with the traditional enumeration method. Moreover, the plate-MPN method is less laborious because colonies do not have to be counted; fungal counts are estimated from the number of plates with growing colonies. These advantages demonstrate that the plate-MPN method is a comparatively superior and rapid method for the enumeration of fungi.

  2. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  3. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEUs) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction, and that every time a memory location is read, single errors are corrected. Memory is read randomly, with a read distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.

  4. Approximation of Integrals Via Monte Carlo Methods, With An Application to Calculating Radar Detection Probabilities

    DTIC Science & Technology

    2005-03-01

    ...synthetic aperture radar and radar detection using both software modelling and mathematical analysis techniques... joined DSTO in 1990, where he has been part of research efforts in the areas of target radar cross section, digital signal processing, inverse... Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities. Graham V. Weinberg and Ross...

  5. The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen; Leighton, Tom; Miller, Gary L.

    1997-01-01

    We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on λ2 and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on λ2 that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.
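
    For context, the quantity being bounded, λ2 (the algebraic connectivity), can be computed directly for small graphs; a minimal sketch for a 5-vertex path graph (an unweighted tree, as in the bound above) is:

        import numpy as np

        # Adjacency matrix of the path graph on 5 vertices.
        A = np.zeros((5, 5))
        for i in range(4):
            A[i, i + 1] = A[i + 1, i] = 1.0

        L = np.diag(A.sum(axis=1)) - A           # graph Laplacian L = D - A
        eigenvalues = np.sort(np.linalg.eigvalsh(L))
        lambda_2 = eigenvalues[1]                # smallest nontrivial eigenvalue
        print(lambda_2)                          # approx. 0.382 for this 5-vertex path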

  6. Fast method for the estimation of impact probability of near-Earth objects

    NASA Astrophysics Data System (ADS)

    Vavilov, D.; Medvedev, Y.

    2014-07-01

    We propose a method to estimate the probability of collision of a celestial body with the Earth (or another major planet) at a given time moment t. Let there be a set of observations of a small body. At the initial time moment T_0, a nominal orbit is defined by the least squares method. In our method, a special coordinate system is used. It is supposed that the errors of the observations are related linearly to the errors of the coordinates and velocities and that the distribution law of the observation errors is normal. The frame is defined as follows. First of all, we fix an osculating ellipse of the body's orbit at the time moment t. The mean anomaly M in this osculating ellipse is one coordinate of the introduced system. The spatial coordinate ξ is perpendicular to the plane which contains the fixed ellipse. η is a spatial coordinate, too, and the axes satisfy the right-hand rule. The origin of ξ and η corresponds to the given point M on the ellipse. The components of the velocity are the corresponding derivatives \dot{\xi}, \dot{\eta}, \dot{M}. To calculate the probability of collision, we numerically integrate the equations of the asteroid's motion taking perturbations into account and calculate a normal matrix N. The probability is determined as
    \[ P = \frac{|\det N|^{1/2}}{(2\pi)^{3}} \int_{\Omega} e^{-\frac{1}{2}\, x^{T} N x}\, dx, \]
    where x denotes a six-dimensional vector of coordinates and velocities, Ω is the region occupied by the Earth, and the superscript T denotes the matrix transpose operation. To take the gravitational attraction of the Earth into account, the radius of the Earth is increased by a factor of \( \sqrt{1 + v_s^2 / v_{rel}^2} \), where v_s is the escape velocity and v_{rel} is the small body's velocity relative to the Earth. The six-dimensional integral is integrated analytically over the velocity components on (-∞, +∞). After that we have a 3×3 matrix N_1, and the six-dimensional integral becomes a three-dimensional integral. To calculate it quickly we do the following. We introduce

  7. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  8. Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.

    PubMed

    Joshi, Niranjan; Kadir, Timor; Brady, Michael

    2011-08-01

    Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.

  9. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    SciTech Connect

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.

  10. A Bio-Inspired Method for the Constrained Shortest Path Problem

    PubMed Central

    Wang, Hongping; Lu, Xi; Wang, Qing

    2014-01-01

    The constrained shortest path (CSP) problem has been widely used in transportation optimization, crew scheduling, network routing and so on. It remains an open issue since it is an NP-hard problem. In this paper, we propose an innovative method based on the internal mechanism of the adaptive amoeba algorithm. The proposed method is divided into two parts. In the first part, we employ the original amoeba algorithm to solve the shortest path problem in directed networks. In the second part, we combine the Physarum algorithm with a bio-inspired rule to deal with the CSP. Finally, by comparing the results with those of another method on an example of the DCLC problem, we demonstrate the accuracy of the proposed method. PMID:24959603

  11. Robot Path Generation Method for a Welding System Based on Pseudo Stereo Visual Servo Control

    NASA Astrophysics Data System (ADS)

    Pachidis, Theodore P.; Tarchanidis, Kostas N.; Lygouras, John N.; Tsalides, Philippos G.

    2005-12-01

    A path generation method for robot-based welding systems is proposed. The method, which is a modification of the "teaching by showing" approach, is supported by the recently developed pseudo stereovision system (PSVS). A path is generated by means of the target-object (TOB), PSVS, and the proposed pseudo stereo visual servo control scheme. A part of the new software application, called humanPT, permits the communication of a user with the robotic system. Here, PSVS, the robotic system, the TOB, the estimation of robot poses by means of the TOB, and the control and recording algorithm are described. Some new concepts concerning segmentation and point correspondence are applied as a complex image is processed. A method for calibrating the endpoint of the TOB is also explained. Experimental results demonstrate the effectiveness of the proposed system.

  12. A bio-inspired method for the constrained shortest path problem.

    PubMed

    Wang, Hongping; Lu, Xi; Zhang, Xiaoge; Wang, Qing; Deng, Yong

    2014-01-01

    The constrained shortest path (CSP) problem has been widely used in transportation optimization, crew scheduling, network routing and so on. It remains an open issue since it is an NP-hard problem. In this paper, we propose an innovative method based on the internal mechanism of the adaptive amoeba algorithm. The proposed method is divided into two parts. In the first part, we employ the original amoeba algorithm to solve the shortest path problem in directed networks. In the second part, we combine the Physarum algorithm with a bio-inspired rule to deal with the CSP. Finally, by comparing the results with those of another method on an example of the DCLC problem, we demonstrate the accuracy of the proposed method.

  13. A probability density function discretization and approximation method for the dynamic load identification of stochastic structures

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Sun, Xingsheng; Li, Kun; Jiang, Chao; Han, Xu

    2015-11-01

    Aiming at structures containing random parameters with multi-peak probability density functions (PDFs) or large coefficients of variation, an analytical method of probability density function discretization and approximation (PDFDA) is proposed for dynamic load identification. Dynamic loads are expressed as functions of time and of the random parameters in the time domain, and the forward model is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions. The PDF of each random parameter is discretized into several subintervals, and in each subinterval the original PDF curve is approximated by a uniform-distribution PDF of equal probability. The joint distribution model is then built, and the equivalent deterministic equations are solved to identify the unknown loads. Inverse analysis is performed separately for each variable in the joint distribution model, with regularization used because the measured responses are contaminated by noise. In order to assess the accuracy of the identified results, PDF curves and statistical properties of the loads are obtained based on the assumed distributions of the identified loads. Numerical simulations demonstrate the efficiency and superiority of the presented method.

  14. RATIONAL DETERMINATION METHOD OF PROBABLE FREEZING INDEX FOR n-YEARS CONSIDERING THE REGIONAL CHARACTERISTICS

    NASA Astrophysics Data System (ADS)

    Kawabata, Shinichiro; Hayashi, Keiji; Kameyama, Shuichi

    This paper investigates a method for obtaining the probable freezing index for n years from past frost-action damage and meteorological data. From an investigation of Japanese cold-winter data for the areas of Hokkaido, Tohoku and south of Tohoku, it was found that the severity of cold winters showed a regular pattern by location from south to north. Also, after obtaining return periods of cold winters by area, clear regional characteristics were found. Mild winters are rare in Hokkaido; however, when Hokkaido did experience cold winters, their magnitude was larger. It was effective to determine the probable freezing indices using 20-, 15- and 10-year return periods for Hokkaido, Tohoku and south of Tohoku, respectively.

  15. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data are shown to illustrate the new methodology in real data analysis.

  16. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method.

    PubMed

    Alani, Amir M; Faramarzi, Asaad

    2015-06-10

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industry concerned (e.g., water companies) to better plan their resources by providing accurate predictions of the remaining safe life of cementitious sewer pipes.

  17. [Comparison of correction methods for nonlinear optic path difference of reflecting rotating Fourier transform spectrometer].

    PubMed

    Jing, Juan-Juan; Zhou, Jin-Song; Xiangli, Bin; Lü, Qun-Bo; Wei, Ru-Yi

    2010-06-01

    The principle of a reflecting rotating Fourier transform spectrometer is introduced in the present paper. The nonlinearity of the optical path difference (OPD) of a rotating Fourier transform spectrometer, produced by the rotation of the rotating mirror, exists universally. The nonlinear OPD leads to a spurious recovered spectrum, so it is necessary to compensate for it. Three correction methods for the nonlinear OPD are described and compared in this paper, namely the NUFFT method, the OPD replacement method and the interferogram fitting method. The results indicate that NUFFT is the best method for compensating the nonlinear OPD. The OPD replacement method is nearly as good: its precision is almost the same as that of the NUFFT method, and the relative errors of both are better than 0.13%, but its computational efficiency is lower than that of the NUFFT method. The precision and computational efficiency of the interferogram fitting method are less satisfactory, because the interferogram fluctuates rapidly, especially around the zero optical path difference, making it unsuitable for polynomial fitting; and because this method requires piecewise fitting, its computational efficiency is the lowest. Thus the NUFFT method is the most suitable for nonlinear OPD compensation in a reflecting rotating Fourier transform spectrometer.

  18. Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors.

    PubMed

    Peng, Xi; Zhao, Bo; Yan, Rui; Tang, Huajin; Yi, Zhang

    2016-03-18

    Address event representation (AER) image sensors represent the visual information as a sequence of events that denotes the luminance changes of the scene. In this paper, we introduce a feature extraction method for AER image sensors based on the probability theory, namely, bag of events (BOE). The proposed approach represents each object as the joint probability distribution of the concurrent events, and each event corresponds to a unique activated pixel of the AER sensor. The advantages of BOE include: 1) it is a statistical learning method and has a good interpretability in mathematics; 2) BOE can significantly reduce the effort to tune parameters for different data sets, because it only has one hyperparameter and is robust to the value of the parameter; 3) BOE is an online learning algorithm, which does not require the training data to be collected in advance; 4) BOE can achieve competitive results in real time for feature extraction (>275 frames/s and >120,000 events/s); and 5) the implementation complexity of BOE only involves some basic operations, e.g., addition and multiplication. This guarantees the hardware friendliness of our method. The experimental results on three popular AER databases (i.e., MNIST-dynamic vision sensor, Poker Card, and Posture) show that our method is remarkably faster than two recently proposed AER categorization systems while preserving a good classification accuracy.

  19. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given.

  20. A backward Monte Carlo method for efficient computation of runaway probabilities in runaway electron simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Guannan; Del-Castillo-Negrete, Diego

    2016-10-01

    Kinetic descriptions of runaway electrons (RE) are usually based on the bounce-averaged Fokker-Planck model that determines the PDFs of RE in the two-dimensional momentum space. Despite the simplification involved, the Fokker-Planck equation can rarely be solved analytically, and direct numerical approaches (e.g., continuum and particle-based Monte Carlo (MC)) can be time consuming, especially in the computation of asymptotic-type observables including the runaway probability, the slowing-down and runaway mean times, and the energy limit probability. Here we present a novel backward MC approach to these problems based on backward stochastic differential equations (BSDEs). The BSDE model can simultaneously describe the PDF of RE and the runaway probabilities by means of the well-known Feynman-Kac theory. The key ingredient of the backward MC algorithm is to place all the particles in a runaway state and simulate them backward from the terminal time to the initial time. As such, our approach can provide much faster convergence than brute-force MC methods, which significantly reduces the number of particles required to achieve a prescribed accuracy. Moreover, our algorithm can be parallelized as easily as a direct MC code, which paves the way for conducting large-scale RE simulations. This work is supported by DOE FES and ASCR under the Contract Numbers ERKJ320 and ERAT377.

  1. Path-integral methods for analyzing the effects of fluctuations in stochastic hybrid neural networks.

    PubMed

    Bressloff, Paul C

    2015-01-01

    We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between fast spiking dynamics with time constant [Formula: see text] and slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter [Formula: see text], which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit [Formula: see text]). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ϵ. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a [Formula: see text]-loop expansion of the path integral, and use this to derive corrections to voltage-based mean-field equations, analogous

  2. Calculation of Coster-Kronig energies and transition probabilities by linear interpolation method

    NASA Astrophysics Data System (ADS)

    Trivedi, R. K.; Shrivastava, Uma; Hinge, V. K.; Shrivastava, B. D.

    2016-10-01

    The X-ray emission spectrum consists of two types of spectral lines having different origins. The diagram lines originate from transitions in a singly ionized atom, while the nondiagram lines or satellites originate from transitions in a doubly or multiply ionized atom. The X-ray satellite energy is the difference between the energies of the initial and final states, which are both doubly or multiply ionized. Thus, the satellite has a different energy than the X-ray diagram line. Once the singly ionized state has been created, it is the probability of a particular subsequent process that will lead to the formation of the two-hole state. The single hole may be converted through a Coster-Kronig transition to a double-hole state. The probability of formation of the double-hole state via this process is written as σ·σ', where σ is the probability of creation of the single-hole state and σ' is the probability of the Coster-Kronig transition. The value of σ' can be taken from the tables of Chen et al. [1], who have presented calculated values of σ' for almost all possible Coster-Kronig transitions in some elements. The energies of the satellites can be calculated using the tables of Parente et al. [2]. Neither of these tables gives values for all elements. The aim of the present work is to show that the values for other elements, for which values are not listed by Chen et al. and Parente et al., can be calculated by the linear interpolation method.
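
    A minimal sketch of the interpolation step itself: estimating a Coster-Kronig transition probability for an element of atomic number Z from tabulated values at neighbouring atomic numbers. The numbers below are placeholders, not values from the tables of Chen et al.

        import numpy as np

        tabulated_Z = np.array([70, 74, 78, 82, 90])                      # atomic numbers with tabulated values
        tabulated_sigma_prime = np.array([0.31, 0.29, 0.27, 0.25, 0.22])  # placeholder σ' values

        Z = 80
        sigma_prime = np.interp(Z, tabulated_Z, tabulated_sigma_prime)
        print(sigma_prime)   # linearly interpolated between the Z = 78 and Z = 82 entries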

  3. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    NASA Astrophysics Data System (ADS)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  4. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-07

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5 (+). We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  5. Ab initio Path Integral Molecular Dynamics Based on Fragment Molecular Orbital Method

    NASA Astrophysics Data System (ADS)

    Fujita, Takatoshi; Watanabe, Hirofumi; Tanaka, Shigenori

    2009-10-01

    We have developed an ab initio path integral molecular dynamics method based on the fragment molecular orbital method. This “FMO-PIMD” method can treat both nuclei and electrons quantum mechanically, and is useful for simulating large hydrogen-bonded systems with high accuracy. After a benchmark calculation for the water monomer, the water trimer and the glycine pentamer have been studied using the FMO-PIMD method to investigate nuclear quantum effects on structure and molecular interactions. The applicability of the present approach is demonstrated through a number of test calculations.

  6. Neutron Distribution in the Nuclear Fuel Cell using Collision Probability Method with Quadratic Flux Approach

    NASA Astrophysics Data System (ADS)

    Shafii, M. A.; Fitriyani, D.; Tongkukut, S. H. J.; Abdullah, A. G.

    2017-03-01

    Solving the integral neutron transport equation using the collision probability (CP) method usually requires a flat flux (FF) approach. In this research, the calculation has been carried out for a cylindrical nuclear fuel cell with a spatial mesh and a quadratic flux approach. This means that the neutron flux in any region of the nuclear fuel cell is forced to follow the pattern of a quadratic function, a treatment that may be referred to as a non-flat flux (NFF) approach. The parameters calculated in this study are the k-eff and the distribution of the neutron flux. The result shows that all parameters are in accordance with the results of SRAC.

  7. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E.; Valentine, John D.; Beauchamp, Brock R.

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
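
    A minimal sketch of the Sequential Probability Ratio Test idea underlying the patent, here applied to Poisson count data with Wald's stopping thresholds; the rates, error levels, and dynamic-background handling are illustrative assumptions, not the patented processing chain.

        import numpy as np

        def sprt_poisson(counts, bkg_rate, elevated_rate, alpha=0.01, beta=0.01):
            """Test H0: counts ~ Poisson(bkg_rate) vs H1: counts ~ Poisson(elevated_rate).

            The cumulative log-likelihood ratio is updated one measurement interval
            at a time and compared against Wald's thresholds until a decision is made."""
            upper = np.log((1.0 - beta) / alpha)       # cross: accept H1 (elevated radiation)
            lower = np.log(beta / (1.0 - alpha))       # cross: accept H0 (background only)
            llr = 0.0
            for n, c in enumerate(counts, start=1):
                llr += c * np.log(elevated_rate / bkg_rate) - (elevated_rate - bkg_rate)
                if llr >= upper:
                    return "elevated", n               # decision and number of intervals used
                if llr <= lower:
                    return "background", n
            return "undecided", len(counts)

        rng = np.random.default_rng(0)
        print(sprt_poisson(rng.poisson(8.0, 200), bkg_rate=5.0, elevated_rate=8.0))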

  8. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of 3 ground-level PICs and 2 above-ground PICs. Three- to 10-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the meteorological and PIC data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability conditions. The maximum error was 22 percent when 3-cycle moving average PICs were used; however, it decreased to 11% when 10-cycle moving average PICs were used. Our validation results suggest that this new VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  9. The aggregate path coupling method for the Potts model on bipartite graph

    NASA Astrophysics Data System (ADS)

    Hernández, José C.; Kovchegov, Yevgeniy; Otto, Peter T.

    2017-02-01

    In this paper, we derive the large deviation principle for the Potts model on the complete bipartite graph Kn,n as n increases to infinity. Next, for the Potts model on Kn,n, we provide an extension of the method of aggregate path coupling that was originally developed in the work of Kovchegov, Otto, and Titus [J. Stat. Phys. 144(5), 1009-1027 (2011)] for the mean-field Blume-Capel model and in Kovchegov and Otto [J. Stat. Phys. 161(3), 553-576 (2015)] for a general mean-field setting that included the generalized Curie-Weiss-Potts model analyzed in the work of Jahnel et al. [Markov Process. Relat. Fields 20, 601-632 (2014)]. We use the aggregate path coupling method to identify and determine the threshold value βs separating the rapid and slow mixing regimes for the Glauber dynamics of the Potts model on Kn,n.

  10. Perturbative method for the derivation of quantum kinetic theory based on closed-time-path formalism.

    PubMed

    Koide, Jun

    2002-02-01

    Within the closed-time-path formalism, a perturbative method is presented, which reduces the microscopic field theory to the quantum kinetic theory. In order to make this reduction, the expectation value of a physical quantity must be calculated under the condition that the Wigner distribution function is fixed, because it is the independent dynamical variable in the quantum kinetic theory. It is shown that when a nonequilibrium Green function in the form of the generalized Kadanoff-Baym ansatz is utilized, this condition appears as a cancellation of a certain part of contributions in the diagrammatic expression of the expectation value. Together with the quantum kinetic equation, which can be derived in the closed-time-path formalism, this method provides a basis for the kinetic-theoretical description.

  11. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    SciTech Connect

    Shafii, Mohammad Ali Meidianti, Rahma Wildian, Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-09-30

    Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method is performed with a flat flux approach. In this research, the CP method is implemented in a cylindrical nuclear fuel cell with a spatial mesh and a non-flat flux approach. This means that the neutron flux at any point in the nuclear fuel cell is considered to differ from point to point, following the distribution pattern of a quadratic flux. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational implementation.

  12. A Monte Carlo method for the PDF (Probability Density Functions) equations of turbulent flow

    NASA Astrophysics Data System (ADS)

    Pope, S. B.

    1980-02-01

    The transport equations of joint probability density functions (pdfs) in turbulent flows are simulated using a Monte Carlo method because finite difference solutions of the equations are impracticable, mainly due to the large dimensionality of the pdfs. Attention is focused on the equation for the joint pdf of chemical and thermodynamic properties in turbulent reactive flows. It is shown that the Monte Carlo method provides a true simulation of this equation and that the amount of computation required increases only linearly with the number of properties considered. Consequently, the method can be used to solve the pdf equation for turbulent flows involving many chemical species and complex reaction kinetics. Monte Carlo calculations of the pdf of temperature in a turbulent mixing layer are reported. These calculations are in good agreement with the measurements of Batt (1977).

  13. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
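
    As a minimal sketch of this kind of Bayesian update (not the paper's full methodology), a conjugate beta-binomial model can blend a prior HEP from an existing HRA method with simulator counts; all names and numbers below are illustrative assumptions.

    ```python
    from scipy import stats

    def update_hep(prior_hep, prior_strength, errors, opportunities):
        """Beta-binomial update of a human error probability (HEP).

        prior_hep      : point estimate from an existing HRA method (e.g. SPAR-H)
        prior_strength : pseudo-observations expressing confidence in the prior
        errors         : observed error count in simulator runs
        opportunities  : number of simulator opportunities for that error
        """
        a0 = prior_hep * prior_strength          # prior "errors"
        b0 = (1.0 - prior_hep) * prior_strength  # prior "successes"
        a, b = a0 + errors, b0 + opportunities - errors
        post = stats.beta(a, b)
        return post.mean(), post.interval(0.90)

    # Example: prior HEP of 1e-2 worth ~50 pseudo-trials, 1 error in 30 simulator runs
    print(update_hep(1e-2, 50, errors=1, opportunities=30))
    ```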

  14. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  15. Estimation of probable maximum precipitation for catchments in eastern India by a generalized method

    NASA Astrophysics Data System (ADS)

    Rakhecha, P. R.; Mandal, B. N.; Kulkarni, A. K.; Deshpande, N. R.

    1995-03-01

    A generalized method to estimate the probable maximum precipitation (PMP) has been developed for catchments in eastern India (80° E, 18° N) by pooling together all the major rainstorms that have occurred in this area. The areal raindepths of these storms are normalized for factors such as storm dew point temperature, distance of the storm from the coast, topographic effects and any intervening mountain barriers between the storm area and the moisture source. The normalized values are then applied, with appropriate adjustment factors, in estimating PMP raindepths for the Subarnarekha river catchment (up to the Chandil dam site), which has an area of 5663 km². The PMP rainfall for 1, 2 and 3 days was found to be roughly 53 cm, 78 cm and 98 cm, respectively. It is expected that the application of the generalized method proposed here will give more reliable estimates of PMP for rainfall events of different durations.

  16. Tunnel-construction methods and foraging path of a fossorial herbivore, Geomys bursarius

    USGS Publications Warehouse

    Andersen, Douglas C.

    1988-01-01

    The fossorial rodent Geomys bursarius excavates tunnels to find and gain access to belowground plant parts. This is a study of how the foraging path of this animal, as denoted by feeding-tunnel systems constructed within experimental gardens, reflects both adaptive behavior and constraints associated with the fossorial lifestyle. The principal method of tunnel construction involves the end-to-end linking of short, linear segments whose directionalities are bimodal, but symmetrically distributed about 0°. The sequence of construction of left- and right-directed segments is random, and segments tend to be equal in length. The resulting tunnel advances, zigzag-fashion, along a single heading. This linearity, and the tendency for branches to be orthogonal to the originating tunnel, are consistent with the search path predicted for a "harvesting animal" (Pyke, 1978) from optimal-foraging theory. A suite of physical and physiological constraints on the burrowing process, however, may be responsible for this geometric pattern. That is, by excavating in the most energy-efficient manner, G. bursarius automatically creates the basic components to an optimal-search path. The general search pattern was not influenced by habitat quality (plant density). Branch origins are located more often than expected at plants, demonstrating area-restricted search, a tactic commonly noted in aboveground foragers. The potential trade-offs between construction methods that minimize energy cost and those that minimize vulnerability to predators are discussed.

  17. Probability Theory

    NASA Astrophysics Data System (ADS)

    Jaynes, E. T.; Bretthorst, G. Larry

    2003-04-01

    Foreword; Preface; Part I. Principles and Elementary Applications: 1. Plausible reasoning; 2. The quantitative rules; 3. Elementary sampling theory; 4. Elementary hypothesis testing; 5. Queer uses for probability theory; 6. Elementary parameter estimation; 7. The central, Gaussian or normal distribution; 8. Sufficiency, ancillarity, and all that; 9. Repetitive experiments, probability and frequency; 10. Physics of 'random experiments'; Part II. Advanced Applications: 11. Discrete prior probabilities, the entropy principle; 12. Ignorance priors and transformation groups; 13. Decision theory: historical background; 14. Simple applications of decision theory; 15. Paradoxes of probability theory; 16. Orthodox methods: historical background; 17. Principles and pathology of orthodox statistics; 18. The Ap distribution and rule of succession; 19. Physical measurements; 20. Model comparison; 21. Outliers and robustness; 22. Introduction to communication theory; References; Appendix A. Other approaches to probability theory; Appendix B. Mathematical formalities and style; Appendix C. Convolutions and cumulants.

  18. Analytical error analysis of Clifford gates by the fault-path tracer method

    NASA Astrophysics Data System (ADS)

    Janardan, Smitha; Tomita, Yu; Gutiérrez, Mauricio; Brown, Kenneth R.

    2016-08-01

    We estimate the success probability of quantum protocols composed of Clifford operations in the presence of Pauli errors. Our method is derived from the fault-point formalism previously used to determine the success rate of low-distance error correction codes. Here we apply it to a wider range of quantum protocols and identify circuit structures that allow for efficient calculation of the exact success probability and even the final distribution of output states. As examples, we apply our method to the Bernstein-Vazirani algorithm and the Steane [[7,1,3]] code.

  19. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method

    PubMed Central

    Alani, Amir M.; Faramarzi, Asaad

    2015-01-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industries concerned (e.g., water companies) to better plan their resources by providing accurate prediction for the remaining safe life of cementitious sewer pipes. PMID:26068092
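
    The paper's SFEM is considerably more involved, but a crude plain Monte Carlo sketch illustrates how random variables for load, material strength, and corrosion combine into a probability of failure; the distributions and parameter values below are purely illustrative assumptions, not the paper's model.

    ```python
    import numpy as np

    def mc_failure_probability(n_samples=100_000, seed=0):
        """Crude Monte Carlo estimate of a failure probability for a corroding pipe.

        Failure is taken here as the stress demand exceeding the corrosion-reduced
        capacity; all distributions and numbers are illustrative only.
        """
        rng = np.random.default_rng(seed)
        load = rng.lognormal(mean=np.log(30.0), sigma=0.2, size=n_samples)   # demand
        strength = rng.normal(60.0, 8.0, size=n_samples)                     # intact capacity
        corrosion_depth = rng.gamma(shape=2.0, scale=1.5, size=n_samples)    # mm of wall lost
        capacity = strength * np.clip(1.0 - corrosion_depth / 20.0, 0.0, None)
        return np.mean(load > capacity)   # fraction of samples in the failure region

    print(mc_failure_probability())
    ```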

  20. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  1. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to survey this kind of population because the response rate is low and its members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods, such as snowball sampling, is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this study, the influence of the chain referral in RDS tends to diminish as the sample gets bigger, and the sample becomes stabilized as the waves progress. Therefore, the final sample information can be completely independent from the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well.

  2. Probability estimation with machine learning methods for dichotomous and multicategory outcome: applications.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Diener, Hans-Christian; Holste, Theresa; Weimar, Christian; König, Inke R; Ziegler, Andreas

    2014-07-01

    Machine learning methods are applied to three different large datasets, all dealing with probability estimation problems for dichotomous or multicategory data. Specifically, we investigate k-nearest neighbors, bagged nearest neighbors, random forests for probability estimation trees, and support vector machines with the kernels of Bessel, linear, Laplacian, and radial basis type. Comparisons are made with logistic regression. The dataset from the German Stroke Study Collaboration with dichotomous and three-category outcome variables allows, in particular, for temporal and external validation. The other two datasets are freely available from the UCI learning repository and provide dichotomous outcome variables. One of them, the Cleveland Clinic Foundation Heart Disease dataset, uses data from one clinic for training and from three clinics for external validation, while the other, the thyroid disease dataset, allows for temporal validation by separating data into training and test data by date of recruitment into study. For dichotomous outcome variables, we use receiver operating characteristics, areas under the curve values with bootstrapped 95% confidence intervals, and Hosmer-Lemeshow-type figures as comparison criteria. For dichotomous and multicategory outcomes, we calculated bootstrap Brier scores with 95% confidence intervals and also compared them through bootstrapping. In a supplement, we provide R code for performing the analyses and for random forest analyses in Random Jungle, version 2.1.0. The learning machines show promising performance over all constructed models. They are simple to apply and serve as an alternative approach to logistic or multinomial logistic regression analysis.

  3. Inverse probability weighting in STI/HIV prevention research: methods for evaluating social and community interventions

    PubMed Central

    Lippman, Sheri A.; Shade, Starley B.; Hubbard, Alan E.

    2011-01-01

    Background Intervention effects estimated from non-randomized intervention studies are plagued by biases, yet social or structural intervention studies are rarely randomized. There are underutilized statistical methods available to mitigate biases due to self-selection, missing data, and confounding in longitudinal, observational data permitting estimation of causal effects. We demonstrate the use of Inverse Probability Weighting (IPW) to evaluate the effect of participating in a combined clinical and social STI/HIV prevention intervention on reduction of incident chlamydia and gonorrhea infections among sex workers in Brazil. Methods We demonstrate the step-by-step use of IPW, including presentation of the theoretical background, data set up, model selection for weighting, application of weights, estimation of effects using varied modeling procedures, and discussion of assumptions for use of IPW. Results 420 sex workers contributed data on 840 incident chlamydia and gonorrhea infections. Participators were compared to non-participators following application of inverse probability weights to correct for differences in covariate patterns between exposed and unexposed participants and between those who remained in the intervention and those who were lost-to-follow-up. Estimators using four model selection procedures provided estimates of intervention effect between odds ratio (OR) 0.43 (95% CI: 0.22-0.85) and 0.53 (95% CI: 0.26-1.1). Conclusions After correcting for selection bias, loss-to-follow-up, and confounding, our analysis suggests a protective effect of participating in the Encontros intervention. Evaluations of behavioral, social, and multi-level interventions to prevent STI can benefit by introduction of weighting methods such as IPW. PMID:20375927
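
    A minimal sketch of the weighting step is given below, assuming a simple point-treatment setting with scikit-learn; it omits the censoring weights and repeated-measures handling described in the paper, and the variable names are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_effect(X, treated, outcome):
        """Inverse-probability-weighted estimate of a participation effect.

        X       : (n, p) array of baseline covariates
        treated : (n,) 0/1 indicator of intervention participation
        outcome : (n,) 0/1 indicator of incident infection
        """
        # 1. Model the probability of participation given covariates (propensity score)
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        # 2. Weight each subject by the inverse probability of the exposure received
        w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
        # 3. Weighted outcome means give marginal risks in each group
        risk1 = np.average(outcome[treated == 1], weights=w[treated == 1])
        risk0 = np.average(outcome[treated == 0], weights=w[treated == 0])
        return risk1 - risk0, risk1 / risk0   # risk difference and risk ratio
    ```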

  4. Improving Salmonella determination in Sinaloa rivers with ultrafiltration and most probable number methods.

    PubMed

    Jimenez, Maribel; Chaidez, Cristobal

    2012-07-01

    Monitoring of waterborne pathogens is improved by using concentration methods prior to detection; however, direct microbial enumeration is desired to study microbial ecology and human health risks. The aim of this work was to determine Salmonella presence in river water with an ultrafiltration system coupled with the ISO 6579:1993 isolation standard method (UFS-ISO). The most probable number (MPN) method was used directly on water samples to estimate Salmonella populations. Additionally, the effect of water turbidity on Salmonella determination was evaluated. Ten liters or three tenfold dilutions (1, 0.1, and 0.01 mL) of water were processed for Salmonella detection and estimation by the UFS-ISO and MPN methods, respectively. A total of 84 water samples were tested, and Salmonella was confirmed in 64/84 (76%) and 38/84 (44%) when UFS-ISO and MPN were used, respectively. Salmonella populations were less than 5 × 10³ MPN/L in 73/84 of the samples evaluated (87%), and only three (3.5%) showed contamination with numbers greater than 4.5 × 10⁴ MPN/L. Water turbidity did not affect Salmonella determination regardless of the method performed. These findings suggest that Salmonella abundance in Sinaloa rivers is not a health risk for human infections in spite of its persistence. Thus, choosing the appropriate strategy to study Salmonella in river water samples is necessary to clarify its behavior and transport in the environment.

  5. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
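
    As a small illustration of the truth-table idea, the joint distribution of two events can be enumerated row by row and Bayes' rule read off as a ratio of row sums; the probabilities below are made-up teaching values, not taken from the paper.

    ```python
    from itertools import product
    from fractions import Fraction

    # Assumed inputs for a classic screening example
    p_a = Fraction(1, 100)               # P(A): prevalence of a condition
    p_b_given_a = Fraction(95, 100)      # P(B | A): test sensitivity
    p_b_given_not_a = Fraction(5, 100)   # P(B | not A): false-positive rate

    # Enumerate the four rows of the truth table for events A and B
    rows = []
    for a, b in product([True, False], repeat=2):
        pa = p_a if a else 1 - p_a
        pb = p_b_given_a if a else p_b_given_not_a
        pb = pb if b else 1 - pb
        rows.append((a, b, pa * pb))     # joint probability of this row

    # Bayes' rule as a ratio of truth-table rows: P(A | B) = P(A and B) / P(B)
    p_a_and_b = sum(p for a, b, p in rows if a and b)
    p_b = sum(p for a, b, p in rows if b)
    print("P(A | B) =", p_a_and_b / p_b)   # prints 19/118, about 0.161
    ```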

  6. Preparing Future Scholars for Academia and Beyond: A Mixed Method Investigation of Doctoral Students' Preparedness for Multiple Career Paths

    ERIC Educational Resources Information Center

    Cason, Jennifer

    2016-01-01

    This action research study is a mixed methods investigation of doctoral students' preparedness for multiple career paths. PhD students face two challenges preparing for multiple career paths: lack of preparation and limited engagement in conversations about the value of their research across multiple audiences. This study focuses on PhD students'…

  7. Nuclear spin selection rules for reactive collision systems by the spin-modification probability method.

    PubMed

    Park, Kisam; Light, John C

    2007-12-14

    The spin-modification probability (SMP) method, which provides fundamental and detailed quantitative information on the nuclear spin selection rules, is discussed more systematically and generalized for reactive collision systems involving more than one configuration of reactant and product molecules, explicitly taking account of the conservation of the overall nuclear spin symmetry as well as the conservation of the total nuclear spin angular momentum, under the assumption of no nuclear hyperfine interaction. The values of SMP once calculated can be used for any system of identical nuclei of any spin as long as the system has the corresponding nuclear spin symmetry. The values of SMP calculated for simple systems can also be used for more complex systems containing several kinds of identical nuclei or various isotopomers. The generalized formulation of statistical scattering theory which can easily represent various rearrangement mechanisms is also presented.

  8. On peculiarities of the method for determining the probability of lightning striking terrestrial explosive objects

    NASA Astrophysics Data System (ADS)

    Gundareva, S. V.; Kalugina, I. E.; Temnikov, A. G.

    2016-10-01

    We have described a new probabilistic method for calculating and assessing the probability of lightning striking terrestrial explosive objects, using a combined criterion for the emergence of upward streamer and leader discharges from the elements of the object being protected and from the lightning rods, taking into account the probabilistic nature of the avalanche-streamer and streamer-leader transitions, the trajectories of a downward stepped lightning leader and the lightning current. It has been shown that disregarding the possible formation of uncompleted streamer discharges from the elements of the object in the electric field of a downward lightning leader, which can ignite explosive emission, decreases the rated probability of the object being damaged by a lightning stroke by several times.

  9. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    PubMed

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  10. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method

    PubMed Central

    Fogel, Allison R.; Rosenberg, Jason C.; Lehman, Frank M.; Kuperberg, Gina R.; Patel, Aniruddh D.

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5–9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such ‘authentic cadence’ melody was matched to a ‘non-cadential’ (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of

  11. A chain-of-states acceleration method for the efficient location of minimum energy paths

    SciTech Connect

    Hernández, E. R. Herrero, C. P.; Soler, J. M.

    2015-11-14

    We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in poly-atomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.

  12. A Meta-Path-Based Prediction Method for Human miRNA-Target Association

    PubMed Central

    Huang, Cong; Ding, Pingjian

    2016-01-01

    MicroRNAs (miRNAs) are short noncoding RNAs that play important roles in regulating gene expression, and perturbed miRNAs are often associated with development and tumorigenesis as they have effects on their target mRNAs. Predicting potential miRNA-target associations from multiple types of genomic data is a considerable problem in bioinformatics research. However, most of the existing methods did not fully use the experimentally validated miRNA-mRNA interactions. Here, we developed RMLM and RMLMSe to predict the relationship between miRNAs and their targets. RMLM and RMLMSe are global approaches as they can reconstruct the missing associations for all miRNA-target pairs simultaneously, and RMLMSe demonstrates that the integration of sequence information can improve the performance of RMLM. In RMLM, we use the RM measure to evaluate different relatedness between a miRNA and its target based on different meta-paths; logistic regression and the MLE method are employed to estimate the weights of the different meta-paths. In RMLMSe, sequence information is utilized to improve the performance of RMLM. Here, we carry out fivefold cross validation and pathway enrichment analysis to prove the performance of our methods. The fivefold experiments show that our methods have higher AUC scores compared with other methods and that the integration of sequence information can improve the performance of miRNA-target association prediction. PMID:27703979

  13. Partially coherent scattering in stellar chromospheres. II - The first-order escape probability method. III - A second-order escape probability method

    NASA Technical Reports Server (NTRS)

    Gayley, K. G.

    1992-01-01

    Approximate analytic expressions are derived for resonance-line wing diagnostics, accounting for frequency redistribution effects, for homogeneous slabs, and slabs with a constant Planck function gradient. Resonance-line emission profiles from a simplified conceptual standpoint are described in order to elucidate the basic physical parameters of the line-forming layers prior to the performance of detailed numerical calculations. An approximate analytic expression is derived for the dependence on stellar surface gravity of the location of the Ca II and Mg II resonance-line profile peaks. An approximate radiative transfer equation using generalized second-order escape probabilities, applicable even in the presence of nearly coherent scattering in the damping wings of resonance lines, is derived. Approximate analytic solutions that can be applied in special regimes and achieve good agreement with accurate numerical results are found.

  14. Country 'choices' or deforestation paths: a method for global change analysis of human-forest interactions.

    PubMed

    Koop, G; Tole, L

    2001-10-01

    Data used in quantitative studies of global tropical deforestation are typically of poor quality. These studies use either cross-sectional or panel data to measure the contribution of social and land use factors to forest decline world wide. However, there are pitfalls in the use of either type of data. Panel data studies treat each year's observation as a distinct, reliable, data point, when a careful examination of the data reveals this assumption to be implausible. In contrast, cross-sectional studies discard most of the time series information in the data, calculating a single average deforestation rate for each country. In this paper, we argue for a middle road between these two approaches: one that does not treat the time series information as completely reliable but does not disregard it altogether. Using a well-known global forest data set (FAO's Production Series Yearbooks), we argue that the most the data can reliably tell us is whether a country's deforestation rate falls into one of four categories or country 'path choices'. We then use the data categorised in this way in a small empirical investigation of the socio-economic causes of deforestation. This multinomial logit framework allows for the determination of the influence of independent variables on the probability that a country will follow one deforestation path vs. another. Results from the logit analysis of key social and land use indicators chosen for their importance in the literature in driving deforestation suggest that the effect of these variables will differ for countries depending on the particular set of deforestation trajectories in question.
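
    A minimal sketch of the multinomial logit step is shown below, using scikit-learn on synthetic stand-in data (the real study uses FAO country indicators); all variable names, features, and categories here are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical country-level data: three stand-in covariates (e.g. population
    # growth, rural density, cropland change); y is one of four deforestation
    # "path" categories coded 0..3.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 3))
    y = rng.integers(0, 4, size=120)

    # Multinomial logistic regression over the four path categories
    model = LogisticRegression(max_iter=2000).fit(X, y)

    # Probability that a new country falls on each deforestation path
    x_new = np.array([[0.5, -1.0, 0.2]])
    print(model.predict_proba(x_new))   # one probability per path category
    ```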

  15. Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li

    2012-01-01

    To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites, through detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture by analyzing remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral Worldview-2 data. The Worldview-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band difference ratios. We then use principal components analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation with a second leave-one-out step to choose parameters on a 9,859 × 23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in analysis improved the predictive power of the provided APM.
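
    A minimal sketch of the classification step is given below, assuming scikit-learn and synthetic stand-in features (the real features are median-based statistics of WorldView-2 band-difference ratios); the shapes and names are illustrative only.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    # X: per-location features built from band-difference ratios (stand-in data),
    # y: 1 for known archaeological sites, 0 for known non-sites.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(133, 28))            # e.g. 33 sites + 100 non-sites, 28 ratios
    y = np.r_[np.ones(33), np.zeros(100)]

    # Principal components analysis followed by linear discriminant analysis
    clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())

    # Leave-one-out posterior probability that each location is a site
    post = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print(post[:5])
    ```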

  16. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  17. Determining optical path difference with a frequency-modulated continuous-wave method.

    PubMed

    Song, Ningfang; Lu, Xiangxiang; Li, Wei; Li, Yang; Wang, Yingying; Liu, Jixun; Xu, Xiaobin; Pan, Xiong

    2015-08-01

    A technique for determining the optical path difference (OPD) between two Raman beams using a frequency-modulated continuous-wave method is investigated. This approach greatly facilitates the measurement and adjustment of the OPD when tuning the OPD is essential to minimize the effects of the diode laser's phase noise on Raman lasers. As a demonstration, the frequencies of the beat note with different OPDs are characterized and analyzed. When the measured beat frequency is 0.367 Hz, the OPD between Raman beams is zero. The phase noise of the Raman laser system after implementation of zeroing of the OPD is also measured.

  18. Quantum free-energy differences from nonequilibrium path integrals. I. Methods and numerical application.

    PubMed

    van Zon, Ramses; Hernández de la Peña, Lisandro; Peslherbe, Gilles H; Schofield, Jeremy

    2008-10-01

    In this paper, the imaginary-time path-integral representation of the canonical partition function of a quantum system and nonequilibrium work fluctuation relations are combined to yield methods for computing free-energy differences in quantum systems using nonequilibrium processes. The path-integral representation is isomorphic to the configurational partition function of a classical field theory, to which a natural but fictitious Hamiltonian dynamics is associated. It is shown that if this system is prepared in an equilibrium state, after which a control parameter in the fictitious Hamiltonian is changed in a finite time, then formally the Jarzynski nonequilibrium work relation and the Crooks fluctuation relation hold, where work is defined as the change in the energy as given by the fictitious Hamiltonian. Since the energy diverges for the classical field theory in canonical equilibrium, two regularization methods are introduced which limit the number of degrees of freedom to be finite. The numerical applicability of the methods is demonstrated for a quartic double-well potential with varying asymmetry. A general parameter-free smoothing procedure for the work distribution functions is useful in this context.
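
    Independent of the path-integral machinery, the Jarzynski average itself is easy to sketch: given nonequilibrium work samples, the free-energy difference follows from exp(-ΔF/kT) = ⟨exp(-W/kT)⟩. The code below is a generic estimator, not the paper's method; the Gaussian test data are an assumption chosen so the exact answer is known.

    ```python
    import numpy as np

    def jarzynski_free_energy(work, kT=1.0):
        """Estimate a free-energy difference from nonequilibrium work samples
        via the Jarzynski relation:  exp(-dF/kT) = < exp(-W/kT) >.

        Uses a log-sum-exp form to avoid numerical underflow for large work values.
        """
        w = np.asarray(work, dtype=float)
        # log_mean = -log( mean(exp(-W/kT)) ), so dF = kT * log_mean
        log_mean = np.log(len(w)) - np.logaddexp.reduce(-w / kT)
        return kT * log_mean

    # Example: Gaussian work samples with mean 2 kT and variance 2 kT^2, for which
    # the limiting Jarzynski answer is dF = mean - var/(2 kT) = 1 kT
    rng = np.random.default_rng(2)
    print(jarzynski_free_energy(rng.normal(2.0, np.sqrt(2.0), size=200_000)))
    ```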

  19. Measurement method for the transition width of precision approach path indicator based on spectral means

    NASA Astrophysics Data System (ADS)

    Shen, Haiping; Zhou, Xiaoli; Zhang, Wanlu; Pan, Jiangen; Liu, Muqing

    2012-10-01

    This paper introduces a new colorimetric measurement method for the transition width of the precision approach path indicator. The measurement system consists of a spectrometer, a fiber probe, a moving means and a ruler. The spectrometer is used to measure the chromaticity coordinates to distinguish the white and red light. The fiber probe is the input of the spectrometer. It is fixed to the moving means, which can move along the upright ruler. The precision approach path indicator, a certain distance away, projects its light onto the fiber probe. By moving the fiber probe up and down across the transition sector, the chromaticity coordinate of the light moves from the white area to the red area. The distance traversed by the fiber probe between the two areas is the width of the transition sector. This width is measured with the ruler and then converted to an angle. With a measurement distance of 10 meters and a ruler precision of 1 millimeter, the precision of the system can be 21 seconds of arc. Compared with the traditional measurement methods, the method introduced in this paper is more precise and strictly accords with the ICAO Annex 14 standard.
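
    The distance-to-angle conversion mentioned above is a one-line trigonometric step; the minimal sketch below (function name assumed) reproduces the quoted resolution of about 21 arcseconds.

    ```python
    import math

    def transition_angle_arcsec(width_m, distance_m):
        """Convert a transition-sector width measured with the ruler into an angle."""
        return math.degrees(math.atan2(width_m, distance_m)) * 3600.0

    # Resolution of the setup described above: 1 mm ruler precision at 10 m distance
    print(round(transition_angle_arcsec(0.001, 10.0)))   # ~21 arcseconds
    ```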

  20. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 Å (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 Å or better. PMID:7549885

  1. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  2. Using computerized tomography to determine ionospheric structures. Part 2, A method using curved paths to increase vertical resolution

    SciTech Connect

    Vittitoe, C.N.

    1993-08-01

    A method is presented to unfold the two-dimensional vertical structure in electron density by using data on the total electron content for a series of paths through the ionosphere. The method uses a set of orthonormal basis functions to represent the vertical structure and takes advantage of curved paths and the eikonal equation to reduce the number of iterations required for a solution. Curved paths allow a more thorough probing of the ionosphere with a given set of transmitter and receiver positions. The approach can be directly extended to more complex geometries.

  3. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    NASA Astrophysics Data System (ADS)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements contributing to the stability of the tunnel structure are identified in order to address various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces a smart approach by which decision-makers will be able to find the overall reliability of the tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to the field, and much effort has been made to use it in tunneling when investigating the reliability of the lining support system for the tunnel structure. Reliability analysis for evaluating the tunnel support performance is therefore the main idea used in this research. Decomposition approaches are used for producing the system block diagram and determining the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining together with the recommended approaches is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for different structural subsystems and the results of numerical analyses are obtained in

  4. Systems and methods for managing shared-path instrumentation and irradiation targets in a nuclear reactor

    DOEpatents

    Heinold, Mark R.; Berger, John F.; Loper, Milton H.; Runkle, Gary A.

    2015-12-29

    Systems and methods permit discriminate access to nuclear reactors. Systems provide penetration pathways to irradiation target loading and offloading systems, instrumentation systems, and other external systems at desired times, while limiting such access during undesired times. Systems use selection mechanisms that can be strategically positioned for space sharing to connect only desired systems to a reactor. Selection mechanisms include distinct paths, forks, diverters, turntables, and other types of selectors. Management methods with such systems permits use of the nuclear reactor and penetration pathways between different systems and functions, simultaneously and at only distinct desired times. Existing TIP drives and other known instrumentation and plant systems are useable with access management systems and methods, which can be used in any nuclear plant with access restrictions.

  5. Self-referenced method for optical path difference calibration in low-coherence interferometry.

    PubMed

    Laubscher, M; Froehly, L; Karamata, B; Salathé, R P; Lasser, T

    2003-12-15

    A simple method for the calibration of optical path difference modulation in low-coherence interferometry is presented. Spectrally filtering a part of the detected interference signal results in a high-coherence signal that encodes the scan imperfections and permits their correction. The method is self-referenced in the sense that no secondary high-coherence light source is necessary. Using a spectrometer setup for spectral filtering allows for flexibility in both the choice of calibration wavelength and the maximum scan range. To demonstrate the method's usefulness, it is combined with a recently published digital spectral shaping technique to measure the thickness of a pellicle beam splitter with a white-light source.

  6. Coupled-cluster method: A lattice-path-based subsystem approximation scheme for quantum lattice models

    SciTech Connect

    Bishop, R. F.; Li, P. H. Y.

    2011-04-15

    An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical spin-1/2 Heisenberg antiferromagnetic spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.

  7. Evaluation of the ERIC-PCR as a probable method to differentiate Avibacterium paragallinarum serovars.

    PubMed

    Hellmuth, Julius Eduard; Hitzeroth, Arina Corli; Bragg, Robert Richard; Boucher, Charlotte Enastacia

    2016-11-21

    Infectious coryza, an upper respiratory tract disease in chickens caused by Avibacterium paragallinarum, leads to huge economic losses. The disease is controlled through vaccination, but vaccination efficacy depends on correct identification of the infecting serovar, as limited cross-protection is reported amongst some serovars. Current identification methods include the haemagglutination inhibition (HI) test, which is demanding and can be subjective. To overcome this, the molecular typing methods proposed are the multiplex PCR and restriction fragment length polymorphism (RFLP) PCR, but low reproducibility is reported. Enterobacterial Repetitive Intergenic Consensus (ERIC) PCR has been suggested for molecular grouping of various bacterial species. This study focuses on evaluating the ERIC-PCR as a probable method to differentiate between Av. paragallinarum serovars by grouping them with reference isolates based on clonal relations. The ERIC-PCR was performed on 12 reference isolates and 41 field isolates originating from South Africa and South America. The data indicate that the ERIC-PCR is not ideal for the differentiation or molecular typing of Av. paragallinarum serovars, as no correlation is found when the banding patterns of field isolates and reference strains are compared. However, the results do indicate that isolates from the same origin share unique banding patterns, indicating a potential clonal relationship; but when they are compared to the reference isolates dominant in the specific area, no correlation can be drawn. Furthermore, although the ERIC-PCR serves a purpose in epidemiological studies, it has proved to have little application in differentiating amongst serovars of Av. paragallinarum or in grouping untyped field strains with known reference strains.

  8. METAPHOR: a machine-learning-based method for the probability density estimation of photometric redshifts

    NASA Astrophysics Data System (ADS)

    Cavuoti, S.; Amaro, V.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-02-01

    A variety of fundamental astrophysical science topics require the determination of very accurate photometric redshifts (photo-z). A wide plethora of methods has been developed, based either on template model fitting or on empirical explorations of the photometric parameter space. Machine-learning-based techniques are not explicitly dependent on physical priors and are able to produce accurate photo-z estimations within the photometric ranges derived from the spectroscopic training set. These estimates, however, are not easy to characterize in terms of a photo-z probability density function (PDF), due to the fact that the analytical relation mapping the photometric parameters onto the redshift space is virtually unknown. We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method designed to provide a reliable PDF of the error distribution for empirical techniques. The method is implemented as a modular workflow, whose internal engine for photo-z estimation makes use of the MLPQNA neural network (Multi Layer Perceptron with Quasi Newton learning rule), with the possibility to easily replace the specific machine-learning model chosen to predict photo-z. We present a summary of results on SDSS-DR9 galaxy data, used also to perform a direct comparison with PDFs obtained by the LE PHARE spectral energy distribution template fitting. We show that METAPHOR is capable of estimating the precision and reliability of photometric redshifts obtained with three different self-adaptive techniques, i.e. MLPQNA, Random Forest and the standard K-Nearest Neighbors models.

  9. Using Multiple Methods to teach ASTR 101 students the Path of the Sun and Shadows

    NASA Astrophysics Data System (ADS)

    D'Cruz, Noella L.

    2015-01-01

    It seems surprising that non-science major introductory astronomy students find the daily path of the Sun and shadows created by the Sun challenging to learn even though both can be easily observed (provided students do not look directly at the Sun). In order for our students to master the relevant concepts, we have usually used lecture, a lecture tutorial (from Prather, et al.) followed by think-pair-share questions, a planetarium presentation and an animation from the Nebraska Astronomy Applet Project to teach these topics. We cover these topics in a lecture-only, one semester introductory astronomy course at Joliet Junior College. Feedback from our Spring 2014 students indicated that the planetarium presentation was the most helpful in learning the path of the Sun while none of the four teaching methods was helpful when learning about shadows cast by the Sun. Our students did not find the lecture tutorial to be much help even though such tutorials have been proven to promote deep conceptual change. In Fall 2014, we continued to use these four methods, but we modified how we teach both topics so our students could gain more from the tutorial. We hoped our modifications would cause students to have a better overall grasp of the concepts. After our regular lecture, we gave a shorter than usual planetarium presentation on the path of the Sun and we asked students to work through a shadow activity from Project Astro materials. Then students completed the lecture tutorial and some think-pair-share questions. After this, we asked students to predict the Sun's path on certain days of the year and we used the planetarium projector to show them how well their predictions matched up. We ended our coverage of these topics by asking students a few more think-pair-share questions. In our poster, we will present our approach to teaching these topics in Fall 2014, how our Fall 2014 students feel about our teaching strategies and how they fared on related test questions.

  10. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths for each individual in a cohort. Preliminary results show considerable differences in a person's estimated exposure across these approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.

  11. A probability density function method for detecting atrial fibrillation using R-R intervals.

    PubMed

    Hong-Wei, Lu; Ying, Sun; Min, Lin; Pi-Ding, Li; Zheng, Zheng

    2009-01-01

    A probability density function (PDF) method is proposed for investigating the structure of the reconstructed attractor of R-R intervals. By constructing the PDF of the distance between two points in the reconstructed phase space of R-R intervals for normal sinus rhythm (NSR) and atrial fibrillation (AF), it is found that the PDFs of NSR and AF R-R intervals differ significantly. Taking advantage of these differences, a characteristic parameter k(n), representing the sum of the slopes over n points of the filtered PDF curve, is put forward to detect both NSR and AF in 400 segments of R-R intervals from the MIT-BIH Atrial Fibrillation database. Parameters such as the number of R-R intervals, the number of embedding dimensions and the slope are optimized for the best detection performance. Results demonstrate that the new algorithm has a fast response speed with as few as 40 R-R intervals, and shows a sensitivity of 0.978 and a specificity of 0.990 at its best detection performance.
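
    A minimal sketch of the basic construction described above (time-delay embedding of an R-R series and a histogram of pairwise distances in the reconstructed phase space); the embedding parameters and the synthetic series are illustrative assumptions, and the characteristic parameter k(n) is not computed.

        # Hedged sketch: PDF of distances between points in the time-delay-reconstructed
        # phase space of an R-R interval series. Embedding parameters and the synthetic
        # data are assumptions for illustration only.
        import numpy as np
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(1)
        rr = 0.8 + 0.05 * rng.standard_normal(400)      # synthetic R-R intervals (s)

        def embed(x, dim=3, delay=1):
            """Time-delay embedding of a 1-D series into dim-dimensional points."""
            n = len(x) - (dim - 1) * delay
            return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

        points = embed(rr, dim=3, delay=1)
        distances = pdist(points)                        # all pairwise distances
        pdf, edges = np.histogram(distances, bins=50, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        print(centers[np.argmax(pdf)])                   # most probable inter-point distance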

  12. New method for estimating greenhouse gas emissions from livestock buildings using open-path FTIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Briz, Susana; Barrancos, José; Nolasco, Dácil; Melián, Gladys; Padrón, Eleazar; Pérez, Nemesio

    2009-09-01

    It is widely known that methane, together with carbon dioxide, is one of the most effective greenhouse gases contributing to global climate change. According to the EMEP/CORINAIR Emission Inventory Guidebook [1], around 25% of global CH4 emissions originate from animal husbandry, especially from enteric fermentation. However, uncertainties in the CH4 emission factors provided by EMEP/CORINAIR are around 30%. For this reason, work aimed at calculating emissions experimentally is important for improving estimates of livestock emissions and for deriving emission factors not included in this inventory. FTIR spectroscopy has frequently been used in different methodologies to measure emission rates in many environmental problems. Some of these methods are based on dispersion modelling techniques, wind data, micrometeorological measurements or the release of a tracer gas. In this work, a new method for calculating emission rates from livestock buildings applying Open-Path FTIR spectroscopy is proposed. This method is inspired by the accumulation chamber method used for CO2 flux measurements in volcanic areas or CH4 flux measurements in wetlands and aquatic ecosystems. The process is the following: the livestock is outside the building, which is ventilated in order to reduce concentrations to ambient level. Once the livestock has been put inside, the building is completely closed and the concentrations of gases emitted by the livestock begin to increase. The Open-Path system measures the concentration evolution of gases such as CO2, CH4, NH3 and H2O. The slope of the concentration evolution function, dC/dt, at the initial time is directly proportional to the flux of the corresponding gas. This method has been applied in a cow shed in the surroundings of La Laguna (Tenerife Island, Spain). As expected, the evolution of the gas concentrations reveals that the livestock building behaves like an accumulation chamber. Preliminary results show that the CH4 emission factor is lower than that proposed by
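
    The flux estimate described above follows from the initial slope dC/dt of the concentration build-up; a hedged sketch is given below, where the synthetic data, fit window, and the F = (V/A) * dC/dt scaling convention are assumptions rather than values from the study.

        # Hedged sketch: estimate an emission flux from the initial slope of a
        # concentration build-up, as in accumulation-chamber style measurements.
        # The synthetic data, fit window and the F = (V/A) * dC/dt convention are
        # assumptions for illustration.
        import numpy as np

        t = np.arange(0.0, 1800.0, 60.0)                       # s, one sample per minute
        c_ppm = 1.9 + 0.004 * t - 5e-7 * t**2                  # synthetic CH4 build-up, ppm

        window = t <= 600.0                                     # fit only the early, near-linear part
        slope, intercept = np.polyfit(t[window], c_ppm[window], 1)   # dC/dt in ppm/s

        volume_m3 = 900.0                                       # assumed building volume
        floor_area_m2 = 300.0                                   # assumed footprint
        flux_ppm_m_per_s = (volume_m3 / floor_area_m2) * slope

        print(f"dC/dt = {slope:.2e} ppm/s, flux = {flux_ppm_m_per_s:.2e} ppm*m/s")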

  13. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and that it can accurately represent the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  14. Extended Mermin Method for Calculating the Electron Inelastic Mean Free Path

    NASA Astrophysics Data System (ADS)

    Da, B.; Shinotsuka, H.; Yoshikawa, H.; Ding, Z. J.; Tanuma, S.

    2014-08-01

    We propose an improved method for calculating electron inelastic mean free paths (IMFPs) in solids from experimental energy-loss functions based on the Mermin dielectric function. The "extended Mermin" method employs an unrestricted number of Mermin oscillators and allows negative oscillators, taking into account not only electronic transitions, as is common in the traditional approaches, but also infrared transitions and inner-shell electron excitations. The use of only Mermin oscillators naturally preserves two important sum rules when extending to infinite momentum transfer. Excellent agreement is found between calculated IMFPs for Cu and experimental measurements from elastic peak electron spectroscopy. Notably improved fits to the IMFPs derived from analyses of x-ray absorption fine structure measurements for Cu and Mo illustrate the importance of the contribution of infrared transitions in IMFP calculations at low energies.

  15. The Anderson--Baird-Parker direct plating method versus the most probable number procedure for enumerating Escherichia coli in meats.

    PubMed

    Rayman, M K; Aris, B

    1981-01-01

    Comparison of the Anderson--Baird-Parker direct plating method (DP) and the North American most probable number procedure (MPN) for enumerating Escherichia coli in frozen meats revealed that the DP method is more precise and yields higher counts of E. coli than the MPN procedure. Any of three brands of membrane filters tested was suitable for use in the DP method.

  16. New method for path-length equalization of long single-mode fibers for interferometry

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Monnier, J. D.; Ozdowy, K.; Woillez, J.; Perrin, G.

    2014-07-01

    The ability to use single mode (SM) fibers for beam transport in optical interferometry offers practical advantages over conventional long vacuum pipes. One challenge facing fiber transport is maintaining a constant differential path length in an environment where thermal variations can lead to cm-level changes from day to night. We have fabricated three composite cables of length 470 m, each containing 4 copper wires and 3 SM fibers that operate at the astronomical H band (1500-1800 nm). Multiple fibers allow us to test the performance of a circular-core fiber (SMF28), a panda-style polarization-maintaining (PM) fiber, and lastly a specialty dispersion-compensated PM fiber. We will present experimental results using precision electrical resistance measurements of a composite cable beam transport system. We find that the application of 1200 W over a 470 m cable causes the optical path difference in air to change by 75 mm (+/- 2 mm) and the resistance to change from 5.36 to 5.50 Ω. Additionally, we show control of the dispersion of 470 m of fiber in a single polarization using white-light interference fringes (λc = 1575 nm, Δλ = 75 nm) obtained with our method.

  17. Observations of cloud liquid water path over oceans: Optical and microwave remote sensing methods

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Rossow, William B.

    1994-01-01

    Published estimates of cloud liquid water path (LWP) from satellite-measured microwave radiation show little agreement, even about the relative magnitudes of LWP in the tropics and midlatitudes. To understand these differences and to obtain more reliable estimates, optical and microwave LWP retrieval methods are compared using the International Satellite Cloud Climatology Project (ISCCP) and special sensor microwave/imager (SSM/I) data. Errors in microwave LWP retrieval associated with uncertainties in surface, atmosphere, and cloud properties are assessed. Sea surface temperature may not produce large LWP errors, if accurate contemporaneous measurements are used in the retrieval. An uncertainty in the estimated near-surface wind speed as large as 2 m/s produces uncertainty in LWP of about 5 mg/sq cm. Cloud liquid water temperature has only a small effect on LWP retrievals (rms errors less than 2 mg/sq cm), if errors in the temperature are less than 5 °C; however, such errors can produce spurious variations of LWP with latitude and season. Errors in atmospheric column water vapor (CWV) are strongly coupled with errors in LWP (for some retrieval methods), causing errors as large as 30 mg/sq cm. Because microwave radiation is much less sensitive to clouds with small LWP (less than 7 mg/sq cm) than visible wavelength radiation, the microwave results are very sensitive to the process used to separate clear and cloudy conditions. Different cloud detection sensitivities in different microwave retrieval methods bias estimated LWP values. Comparing ISCCP and SSM/I LWPs, we find that the two estimated values are consistent in global, zonal, and regional means for warm, nonprecipitating clouds, which have average LWP values of about 5 mg/sq cm and occur much more frequently than precipitating clouds. Ice water path (IWP) can be roughly estimated from the differences between ISCCP total water path and SSM/I LWP for cold, nonprecipitating clouds. IWP in the winter hemisphere is about

  18. A reliable acoustic path: Physical properties and a source localization method

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo

    2012-12-01

    The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom, where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists within a medium range. Ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that source localization based on arrival angle is feasible in this environment. However, the conventional methods suffer from complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method minimizes the distance between the signal subspace and the space spanned by the array manifold in a finite range-depth space rather than in the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.

  19. Application of LAMBDA Method to the Calculation of Slant Path Wet Vapor Content of GPS Signals

    NASA Astrophysics Data System (ADS)

    Huang, Shan-Qi; Wang, Jie-Xian; Wang, Xiao-Ya; Chen, Jun-Ping

    2009-10-01

    With the improvement of GPS data processing techniques and computational accuracy, GPS has been increasingly and widely applied to atmospheric science. In GPS meteorology research, the slant path wet vapor content (SWV) is one of the significant parameters. To address the poor real-time performance of the method proposed by Song Shuli et al. in 2004, which directly calculates the SWV by means of the precise ephemeris, the IGS clock error and the observed value of the LC combination after cycle-slip processing, the LAMBDA method, which has seen more mature application to city virtual reference stations (VRS), is applied to the ambiguity search problem. Trial calculations with observed data verify that the method is feasible and that better consistency is obtained when the calculated result is projected into the zenith direction. The atmospheric delay in the vertical direction obtained by this method is compared with the results of GAMIT and BERNESE; the agreement with the BERNESE result is generally better than 1.5 cm and the agreement with the GAMIT result is generally better than 10 cm.

  20. A probability evaluation method of early deterioration condition for the critical components of wind turbine generator systems

    NASA Astrophysics Data System (ADS)

    Hu, Yaogang; Li, Hui; Liao, Xinglin; Song, Erbing; Liu, Haitao; Chen, Z.

    2016-08-01

    This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain, fluctuating and intermittent nature of wind, early deterioration condition evaluation poses a challenge to traditional vibration-based condition monitoring methods. Owing to their thermal inertia and strong anti-interference capacity, temperature characteristic parameters used as deterioration indicators are not easily disturbed by uncontrollable noise and the uncertain nature of wind. This paper provides a probability evaluation method of the early deterioration condition of critical components based only on temperature characteristic parameters. First, a dynamic threshold for the deterioration degree function is proposed by analyzing the operational data relating temperature and rotor speed. Second, a probability evaluation method of the early deterioration condition is presented. Finally, two cases show the validity of the proposed probability evaluation method in detecting early deterioration and in tracking the further deterioration of the critical components.

  1. Unsteady panel method for flows with multiple bodies moving along various paths

    NASA Technical Reports Server (NTRS)

    Richason, Thomas F.; Katz, Joseph; Ashby, Dale L.

    1993-01-01

    A potential flow based three-dimensional panel method was modified to treat time-dependent conditions in which several submerged bodies can move within the fluid along different trajectories. This modification was accomplished by formulating the momentary solution in an inertial frame-of-reference, attached to the undisturbed stationary fluid. Consequently, the numerical interpretation of the multiple-body, solid-surface boundary condition and the viscous wake rollup was considerably simplified. The unsteady capability of this code was validated by comparing computed and experimental results for a finite wing undergoing pitch oscillations. In order to demonstrate the multicomponent capability, computations were made for two wings following closely intersecting paths (e.g., to avoid mid-air collisions) and for a flow field with relative rotation (e.g., helicopter-rotor/fuselage interaction). Results were compared to experimental data when such data were available.

  2. Mapping variable ring polymer molecular dynamics: A path-integral based method for nonadiabatic processes

    NASA Astrophysics Data System (ADS)

    Ananth, Nandini

    2013-09-01

    We introduce mapping-variable ring polymer molecular dynamics (MV-RPMD), a model dynamics for the direct simulation of multi-electron processes. An extension of the RPMD idea, this method is based on an exact, imaginary time path-integral representation of the quantum Boltzmann operator using continuous Cartesian variables for both electronic states and nuclear degrees of freedom. We demonstrate the accuracy of the MV-RPMD approach in calculations of real-time, thermal correlation functions for a range of two-state single-mode model systems with different coupling strengths and asymmetries. Further, we show that the ensemble of classical trajectories employed in these simulations preserves the Boltzmann distribution and provides a direct probe into real-time coupling between electronic state transitions and nuclear dynamics.

  3. Method- and species-specific detection probabilities of fish occupancy in Arctic lakes: Implications for design and management

    USGS Publications Warehouse

    Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.

    2013-01-01

    Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.
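
    A minimal sketch of a single-season occupancy likelihood with constant occupancy (psi) and detection probability (p), fitted by maximum likelihood on simulated detection histories; the constant-parameter structure and toy data are assumptions and do not reproduce the covariate models of the study.

        # Hedged sketch: constant-psi, constant-p single-season occupancy model.
        # For each lake surveyed K times, the likelihood contribution is
        #   psi * p^y * (1-p)^(K-y)        if the species was detected (y > 0)
        #   psi * (1-p)^K + (1 - psi)      if it was never detected (y = 0)
        # Simulated data and the single-gear structure are illustrative assumptions.
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        rng = np.random.default_rng(2)
        n_sites, K = 60, 4
        true_psi, true_p = 0.7, 0.3
        occupied = rng.random(n_sites) < true_psi
        y = rng.binomial(K, true_p * occupied)                 # detections per site

        def neg_log_lik(params):
            psi, p = expit(params)                             # keep parameters in (0, 1)
            lik_detected = psi * p**y * (1.0 - p)**(K - y)
            lik_missed = psi * (1.0 - p)**K + (1.0 - psi)
            lik = np.where(y > 0, lik_detected, lik_missed)
            return -np.sum(np.log(lik))

        fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
        psi_hat, p_hat = expit(fit.x)
        print(f"psi_hat = {psi_hat:.2f}, p_hat = {p_hat:.2f}")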

  4. Indicator and probability kriging methods for delineating Cu, Fe, and Mn contamination in groundwater of Najafgarh Block, Delhi, India.

    PubMed

    Adhikary, Partha Pratim; Dash, Ch Jyotiprava; Bej, Renukabala; Chandrasekharan, H

    2011-05-01

    Two non-parametric kriging methods, indicator kriging and probability kriging, were compared and used to estimate the probability that concentrations of Cu, Fe, and Mn in groundwater exceed a threshold value. In indicator kriging, experimental semivariogram values were fitted well by a spherical model for Fe and Mn. An exponential model was found to be best for all the metals in probability kriging and for Cu in indicator kriging. The probability maps of all the metals exhibited an increasing risk of pollution over the entire study area. The probability kriging estimator, which incorporates information about order relations that indicator kriging does not, improved the accuracy of estimating the probability that metal concentrations in groundwater exceed a threshold value. Evaluation of these two spatial interpolation methods through mean error (ME), mean square error (MSE), kriged reduced mean error (KRME), and kriged reduced mean square error (KRMSE) showed a 3.52% better performance of probability kriging over indicator kriging. The combined results of the two kriging methods indicated that, on average, 26.34%, 65.36%, and 99.55% of the area for Cu, Fe, and Mn, respectively, falls within the risk zone with a probability of exceeding the cutoff value of 0.6 or more. The groundwater quality map pictorially represents groundwater zones as "desirable" or "undesirable" for drinking. Thus the geostatistical approach is very helpful for planners and decision makers in devising policy guidelines for efficient management of groundwater resources so as to enhance groundwater recharge and minimize the pollution level.

  5. Comparison of micrometeorological methods using open-path optical instruments for measuring methane emission from agricultural sites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we evaluated the accuracies of two relatively new micrometeorological methods using open-path tunable diode laser absorption spectrometers: the vertical radial plume mapping method (US EPA OTM-10) and the backward Lagrangian stochastic method (Wintrax®). We have evaluated the accuracy of t...

  6. Delivery Path Length and Holding Tree Minimization Method of Securities Delivery among the Registration Agencies Connected as Non-Tree

    NASA Astrophysics Data System (ADS)

    Shimamura, Atsushi; Moritsu, Toshiyuki; Someya, Harushi

    To dematerialize securities such as stocks or corporate bonds, the securities were registered to accounts in registration agencies connected as a tree. This tree structure had an advantage in managing securities that were issued in large amounts and for which the number of brands was limited. However, when securities such as accounts receivable or advance notes are dematerialized, the number of brands of securities increases dramatically. In this case, managing securities with a tree structure becomes very difficult because information concentrates at the root of the tree. To resolve this problem, a graph structure is assumed instead of a tree structure. When securities are kept in a tree structure, the delivery path of securities is unique; when they are kept in a graph structure, the delivery path is not unique. In this report, we describe the requirements for the delivery path of securities and a method for selecting the path.

  7. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is a 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
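
    For hit/miss data, the 90/95 criterion can be checked with a one-sided exact (Clopper-Pearson) lower confidence bound on the POD, as in the sketch below; this is a generic binomial bound with illustrative counts, not the DOEPOD procedure itself.

        # Hedged sketch: one-sided exact (Clopper-Pearson) lower confidence bound on
        # the probability of detection for hit/miss data, compared against the 90/95
        # requirement. Example counts are illustrative, not DOEPOD output.
        from scipy.stats import beta

        def pod_lower_bound(hits, trials, confidence=0.95):
            """Exact one-sided lower bound on POD from hits out of trials."""
            if hits == 0:
                return 0.0
            return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        for hits, trials in [(29, 29), (28, 29), (45, 46)]:
            lb = pod_lower_bound(hits, trials)
            ok = "meets" if lb >= 0.90 else "fails"
            print(f"{hits}/{trials}: 95% lower bound = {lb:.3f} -> {ok} 90/95")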

  8. Estimating the Probability of Being the Best System: A Generalized Method and Nonparametric Hypothesis Test

    DTIC Science & Technology

    2013-03-01

    Thesis presented to the Faculty of the Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Air University, Air Education and Training Command, in partial fulfillment of the requirements for the degree of Master of Science in Operations [...]. The indexed excerpt discusses estimating the unknown multinomial success probabilities for each of the systems [17]; Bechhofer and Sobel [18] made use of multinomial [...]

  9. Path durations for use in the stochastic‐method simulation of ground motions

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.

    2014-01-01

    The stochastic method of ground‐motion simulation assumes that the energy in a target spectrum is spread over a duration DT. DT is generally decomposed into the duration due to source effects (DS) and to path effects (DP). For the most commonly used source, seismological theory directly relates DS to the source corner frequency, accounting for the magnitude scaling of DT. In contrast, DP is related to propagation effects that are more difficult to represent by analytic equations based on the physics of the process. We are primarily motivated to revisit DT because the function currently employed by many implementations of the stochastic method for active tectonic regions underpredicts observed durations, leading to an overprediction of ground motions for a given target spectrum. Further, there is some inconsistency in the literature regarding which empirical duration corresponds to DT. Thus, we begin by clarifying the relationship between empirical durations and DT as used in the first author’s implementation of the stochastic method, and then we develop a new DP relationship. The new DP function gives significantly longer durations than in the previous DP function, but the relative contribution of DP to DT still diminishes with increasing magnitude. Thus, this correction is more important for small events or subfaults of larger events modeled with the stochastic finite‐fault method.

  10. A new method of reconstructing current paths in HTS tapes with defects

    NASA Astrophysics Data System (ADS)

    Podlivaev, Alexey; Rudnev, Igor

    2017-03-01

    We propose a new method for calculating current paths in high-temperature superconducting (HTS) tapes with various defects, including cracks, non-superconducting inclusions, and superconducting inclusions with lower local critical current density. The calculation method is based on a critical state model which takes into account the dependence of the critical current on the magnetic field. The method allows us to calculate the spatial distribution of currents flowing through the defective HTS tape both for currents induced by the external magnetic field and for transport currents from an external source. For both cases, we performed simulations of the current distributions in tapes with different types of defects and have shown that the combined action of the magnetic field and the transport current leads to a more detailed identification of the boundaries and shape of the defects. The proposed method is adapted to the calculation of modern superconductors in real superconducting devices and may be more useful than conventional magnetometric diagnostic studies, in which the tape is affected by the magnetic field only.

  11. A routing path construction method for key dissemination messages in sensor networks.

    PubMed

    Moon, Soo Young; Cho, Tae Ho

    2014-01-01

    Authentication is an important security mechanism for detecting forged messages in a sensor network. Each cluster head (CH) in dynamic key distribution schemes forwards a key dissemination message that contains encrypted authentication keys within its cluster to next-hop nodes for the purpose of authentication. The forwarding path of the key dissemination message strongly affects the number of nodes to which the authentication keys in the message are actually distributed. We propose a routing method for the key dissemination messages to increase the number of nodes that obtain the authentication keys. In the proposed method, each node selects next-hop nodes to which the key dissemination message will be forwarded based on secret key indexes, the distance to the sink node, and the energy consumption of its neighbor nodes. The experimental results show that the proposed method can increase by 50-70% the number of nodes to which authentication keys in each cluster are distributed compared to geographic and energy-aware routing (GEAR). In addition, the proposed method can detect false reports earlier by using the distributed authentication keys, and it consumes less energy than GEAR when the false traffic ratio (FTR) is ≥ 10%.

  12. New Method for the Characterization of 3D Preferential Flow Paths at the Field

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Preferential flow path development in the field is the result of the complex interaction of multiple processes relating to the soil's structure, moisture condition, stress level, and biological activities. Visualizing and characterizing the cracking behavior and preferential path evolution with so...

  13. Approximation of Integrals via Monte Carlo Methods, With an Applications to Calculating Radar Detection Probabilities

    DTIC Science & Technology

    2005-03-01

    the areas of target radar cross section, digital signal processing, inverse synthetic aperture radar and radar detection using both software... Application to Calculating Radar Detection Probabilities; Graham V. Weinberg and Ross Kyprianou, Electronic Warfare and Radar Division, Systems Sciences... Beta functions. A significant application, in the context of radar detection theory, is based upon the work of [Shnidman 1998]. The latter considers
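
    As a generic illustration of Monte Carlo approximation of integrals of the kind used in detection-probability work (not the estimators of this report), the sketch below averages an integrand over uniform samples and reports the standard error of the estimate.

        # Hedged sketch: plain Monte Carlo estimate of a one-dimensional integral,
        # with its standard error; the integrand and interval are illustrative and
        # unrelated to the specific detection-probability integrals of the report.
        import numpy as np
        from math import erf, sqrt, pi

        rng = np.random.default_rng(3)
        a, b, n = 0.0, 3.0, 100_000

        x = rng.uniform(a, b, n)
        fx = np.exp(-x**2)

        estimate = (b - a) * fx.mean()                  # E[(b-a) f(U)] equals the integral
        std_error = (b - a) * fx.std(ddof=1) / np.sqrt(n)

        exact = sqrt(pi) / 2.0 * erf(b)                 # closed form for this integrand
        print(f"MC estimate {estimate:.5f} +/- {std_error:.5f}, exact {exact:.5f}")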

  14. Creation of a global land cover and a probability map through a new map integration method

    NASA Astrophysics Data System (ADS)

    Kinoshita, Tsuguki; Iwao, Koki; Yamagata, Yoshiki

    2014-05-01

    Global land cover maps are widely used for assessment and in research of various kinds, and in recent years have also come to be used for socio-economic forecasting. However, existing maps are not very accurate, and differences between maps also contribute to their unreliability. Improving the accuracy of global land cover maps would benefit a number of research fields. In this paper, we propose a methodology for using ground truth data to integrate existing global land cover maps. We checked the accuracy of a map created using this methodology and found that the accuracy of the new map is 74.6%, which is 3% higher than for existing maps. We then created a 0.5-min latitude by 0.5-min longitude probability map. This map indicates the probability of agreement between the category class of the new map and truth data. Using the map, we found that the probabilities of cropland and grassland are relatively low compared with other land cover types. This appears to be because the definitions of cropland differ between maps, so the accuracy may be improved by including pasture and idle plot categories.

  15. The use of the stationary phase method as a mathematical tool to determine the path of optical beams

    NASA Astrophysics Data System (ADS)

    Carvalho, Silvânia A.; De Leo, Stefano

    2015-03-01

    We use the stationary phase method to determine the paths of optical beams that propagate through a dielectric block. In the presence of partial internal reflection, we recover the geometrical result obtained by using Snell's law. For total internal reflection, the stationary phase method overreaches Snell's law, predicting the Goos-Hänchen shift.

  16. Enhanced accuracy of coliform testing in seawater by a modification of the most-probable-number method.

    PubMed Central

    Olson, B H

    1978-01-01

    A 1-year study of marine water samples from six beach locations showed that the most-probable-number method failed to recover significant numbers of coliforms. Modifying this method by transferring, after 48 h, presumptive negatives (growth and no gas production) to confirmed and fecal coliform media significantly improved recovery. Tests which were presumptive negative but confirmed as fecal coliform positive were designated as false negatives. Most-probable-number method false negatives occurred throughout the year, with 143 of 270 samples collected producing false negatives. More than 50% of fecal coliform false-negative isolates were Escherichia coli. Inclusion of false-negative tubes in the coliform most-probable-number data resulted in increased violation of the California ocean water contact sports standard at all sites. More than 20% of the samples collected were in violation of this standard. These data indicate that modification of the most-probable-number method increases detection of coliform numbers in the marine environment. PMID:365107

  17. Methodologies and Comparisons for Lund's Two Methods for Calculating Probability of Cloud-Free Line-of-Sight.

    NASA Astrophysics Data System (ADS)

    Yu, Shawn; Case, Kenneth E.; Chernick, Julian

    1986-03-01

    To help in the implementation of Lund's probability of cloud-free line-of-sight (PCFLOS) calculations (method A and method B) for limited altitudes, a methodology for cumulative cloud cover calculation (required for both methods) is introduced and a methodology for cumulative cloud form determination (required for method B) is developed. To study the PCFLOS differences between the two methods, Lund's master matrices are investigated and the derived PCFLOS results of Hamburg, Germany, are compared and analyzed for variations in selected environmental parameters. Based upon numerical studies performed in this research effort, it is strongly recommended that Lund's method B should always be adopted for general purpose worldwide PCFLOS calculations.

  18. Simulation of thermal ionization in a dense helium plasma by the Feynman path integral method

    NASA Astrophysics Data System (ADS)

    Shevkunov, S. V.

    2011-04-01

    The region of equilibrium states is studied where the quantum nature of the electron component and the strong nonideality of the plasma play a key role. The problem of negative signs in the calculation of equilibrium averages of a system of indistinguishable quantum particles with spin is solved in the macroscopic limit. It is demonstrated that the calculation can be carried through to a numerical result. The complete set of symmetrized basis wave functions is constructed based on the Young symmetry operators. The combinatorial weight coefficients of the states corresponding to different graphs of connected Feynman paths in multiparticle systems are calculated by the method of random walk over permutation classes. The kinetic energy is calculated using a virial estimator at finite pressure in a statistical ensemble with flexible boundaries. Based on the methods developed in the paper, a computer simulation is performed for a dense helium plasma in the temperature range from 30000 to 40000 K. The equation of state, internal energy, ionization degree, and structural characteristics of the plasma are calculated in terms of spatial correlation functions. The parameters of a pseudopotential plasma model are estimated.

  19. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular the SUPG scheme, over finite difference schemes such as the modified Lax-Wendroff, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  20. The method of joint probability distribution functions applied to the one-wavelength anomalous-scattering (OAS) case.

    PubMed

    Giacovazzo, C; Siliqi, D

    2001-01-01

    The method of the joint probability distribution function is applied to the case in which the positions of the anomalous scatterers are fully or partially known. The mathematical technique is able to handle errors both in the model structure of the located anomalous scatterers and in measurements. A criterion for ranking the more accurate phase estimates is given.

  1. Spline Histogram Method for Reconstruction of Probability Density Functions of Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Docenko, Dmitrijs; Berzins, Karlis

    We describe the spline histogram algorithm, which is useful for visualization of the probability density function when setting up a statistical hypothesis for a test. The spline histogram is constructed from discrete data measurements using tensioned cubic spline interpolation of the cumulative distribution function, which is then differentiated and smoothed using the Savitzky-Golay filter. The optimal width of the filter is determined by minimization of the Integrated Square Error function. The current distribution of the TCSplin algorithm, written in f77 with IDL and Gnuplot visualization scripts, is available from www.virac.lv/en/soft.html.
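
    A minimal sketch of the pipeline described above (empirical CDF, cubic-spline interpolation, differentiation, Savitzky-Golay smoothing); it uses an ordinary untensioned scipy spline and a fixed filter width instead of the tensioned spline and ISE-minimizing width of TCSplin.

        # Hedged sketch of a spline-histogram style density estimate: interpolate the
        # empirical CDF with a cubic spline, differentiate it, then smooth with a
        # Savitzky-Golay filter. Uses an ordinary (untensioned) spline and a fixed
        # filter width, unlike the TCSplin implementation described in the abstract.
        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import savgol_filter

        rng = np.random.default_rng(4)
        data = np.sort(rng.normal(0.0, 1.0, 300))

        # empirical CDF evaluated at the data points
        ecdf = (np.arange(1, data.size + 1) - 0.5) / data.size

        cdf_spline = CubicSpline(data, ecdf)
        grid = np.linspace(data[0], data[-1], 400)
        density = cdf_spline(grid, 1)                       # first derivative of the CDF
        density = savgol_filter(density, window_length=31, polyorder=3)
        density = np.clip(density, 0.0, None)               # remove small negative wiggles

        print("integral ~", np.trapz(density, grid))        # should be close to 1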

  2. Nonparametric maximum likelihood estimation of probability densities by penalty function methods

    NASA Technical Reports Server (NTRS)

    Demontricher, G. F.; Tapia, R. A.; Thompson, J. R.

    1974-01-01

    When it is not known a priori exactly to which finite dimensional manifold the probability density function giving rise to a set of samples belongs, the parametric maximum likelihood estimation procedure leads to poor estimates and is unstable, while the nonparametric maximum likelihood procedure is undefined. A very general theory of maximum penalized likelihood estimation which should avoid many of these difficulties is presented. It is demonstrated that each reproducing kernel Hilbert space leads, in a very natural way, to a maximum penalized likelihood estimator and that a well-known class of reproducing kernel Hilbert spaces gives polynomial splines as the nonparametric maximum penalized likelihood estimates.

  3. A DIRECT METHOD TO DETERMINE THE PARALLEL MEAN FREE PATH OF SOLAR ENERGETIC PARTICLES WITH ADIABATIC FOCUSING

    SciTech Connect

    He, H.-Q.; Wan, W. E-mail: wanw@mail.iggcas.ac.cn

    2012-03-01

    The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.

  4. Burnup calculation by the method of first-flight collision probabilities using average chords prior to the first collision

    NASA Astrophysics Data System (ADS)

    Karpushkin, T. Yu.

    2012-12-01

    A technique to calculate the burnup of materials of cells and fuel assemblies using the matrices of first-flight neutron collision probabilities rebuilt at a given burnup step is presented. A method to rebuild and correct first collision probability matrices using average chords prior to the first neutron collision, which are calculated with the help of geometric modules of constructed stochastic neutron trajectories, is described. Results of calculation of the infinite multiplication factor for elementary cells with a modified material composition compared to the reference one as well as calculation of material burnup in the cells and fuel assemblies of a VVER-1000 are presented.

  5. New method for the characterization of three-dimensional preferential flow paths in the field

    NASA Astrophysics Data System (ADS)

    Abou Najm, Majdi R.; Jabro, Jalal D.; Iversen, William M.; Mohtar, Rabi H.; Evans, Robert G.

    2010-02-01

    Preferential flow path development in the field is the result of the complex interaction of multiple processes relating to the soil's structure, moisture condition, stress level, and biological activity. Visualizing and characterizing the cracking behavior and preferential paths evolution with soil depth has always been a key challenge and a major barrier against scaling up existing hydrologic concepts and models to account for preferential flows. This paper presents a new methodology to quantify soil preferential paths in the field using liquid latex. The evolution of the preferential flow paths at different soil depths and moisture conditions is assessed. Results from different soil series (Savage clay loam soil versus Chalmers clay loam) and different vegetation covers and soil managements (corn/tilled field versus soybean no-till field in the Chalmers soil series) are presented.

  6. International Diffusion of Open Path FTIR Technology and Air Monitoring Methods: Taiwan (Republic of China).

    PubMed

    Giese-Bogdan, Stefan; Levine, Steven P

    1996-08-01

    International cooperation and diffusion of environmental technologies is a central goal of the U.S. EPA Environmental Technology Initiative, and is of great interest to many countries. One objective is to exchange knowledge and skills concerning new monitoring technologies. In this case, the technology was open path Fourier Transform Infrared Spectrometry (op-FTIR). Taiwan is a high-technology, newly industrialized country. Because of air pollution problems, it is interested in obtaining skills, knowledge, and instrumentation for monitoring air pollutants. In April 1994, the Industrial Technology Research Institute, Center for Industrial Safety and Health Technology (ITRI/CISH) in Hsinchu, Taiwan, requested intensive training in op-FTIR. Training was held between September 30, 1994 and October 29, 1994. During the stay, the instructor provided intensive training on op-FTIR theory as well as an introduction to available instrumentation and software. The training concluded with a field demonstration of the instrumentation in a manufacturing facility. This report gives an overview of the training methods, structure, and materials in the op-FTIR training course. It will also address various problems encountered while teaching this course. In addition, the potential use for this technology in industry as well as by the Taiwanese government will be explained.

  7. A general parallelization strategy for random path based geostatistical simulation methods

    NASA Astrophysics Data System (ADS)

    Mariethoz, Grégoire

    2010-07-01

    The size of simulation grids used for numerical models has increased by many orders of magnitude in the past years, and this trend is likely to continue. Efficient pixel-based geostatistical simulation algorithms have been developed, but for very large grids and complex spatial models, the computational burden remains heavy. As cluster computers become widely available, using parallel strategies is a natural step for increasing the usable grid size and the complexity of the models. These strategies must profit from the possibilities offered by machines with a large number of processors. On such machines, the bottleneck is often the communication time between processors. We present a strategy distributing grid nodes among all available processors while minimizing communication and latency times. It consists in centralizing the simulation on a master processor that calls other slave processors as if they were functions simulating one node at a time. The key is to decouple the sending and the receiving operations to avoid synchronization. Centralization allows a conflict management system ensuring that nodes being simulated simultaneously do not interfere in terms of neighborhood. The strategy is computationally efficient and is versatile enough to be applicable to all random path based simulation methods.
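
    A minimal sketch of the centralized master/worker dispatch described above, using Python multiprocessing with asynchronous task submission; the toy simulate_node function is a stand-in, and the neighborhood conflict management of a real geostatistical simulation is omitted.

        # Hedged sketch of the centralized master/worker dispatch described above:
        # the master submits one grid node per task without blocking on each send,
        # then retrieves the results. The simulate_node function is a toy stand-in;
        # conflict management between neighboring nodes is not implemented.
        import multiprocessing as mp
        import random

        def simulate_node(node_id, seed):
            """Toy per-node simulation; a real kriging-based draw would go here."""
            random.seed(seed)
            return node_id, random.gauss(0.0, 1.0)

        def master(n_nodes=1000, n_workers=4):
            results = {}
            with mp.Pool(processes=n_workers) as pool:
                # submit all nodes without waiting (decoupled send)
                pending = [pool.apply_async(simulate_node, (i, 1234 + i))
                           for i in range(n_nodes)]
                # retrieve results; submission and retrieval are decoupled
                for task in pending:
                    node_id, value = task.get()
                    results[node_id] = value
            return results

        if __name__ == "__main__":
            field = master()
            print(len(field), "nodes simulated")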

  8. Unsteady panel method for flows with multiple bodies moving along various paths

    NASA Technical Reports Server (NTRS)

    Richardson, Thomas F.; Katz, Joseph; Ashby, Dale L.

    1994-01-01

    A potential flow based three-dimensional panel method was modified to treat time-dependent conditions in which several submerged bodies can move within the fluid along different trajectories. This modification was accomplished by formulating the momentary solution in an inertial frame of reference, attached to the undisturbed stationary fluid. Consequently, the numerical interpretation of the multiple-body, solid-surface boundary condition and the viscous wake rollup was considerably simplified. The unsteady capability of this code was calibrated and validated by comparing computed results with closed-form analytical results available for an airfoil, which was impulsively set into a constant-speed forward motion. To demonstrate the multicomponent capability, computations were made for two wings following closely intersecting paths (i.e., simulations aimed at avoiding mid-air collisions) and for a flowfield with relative rotation (i.e., the case of a helicopter rotor rotating relative to the fuselage). Computed results for the cases were compared to experimental data, when such data were available.

  9. [Use of nonparametric methods in medicine. V. A probability test using iteration].

    PubMed

    Gerylovová, A; Holcík, J

    1990-10-01

    The authors give an account of the so-called Wald-Wolfowitz runs (iteration) test for two types of elements, by means of which it is possible to test the probability of the observed pattern of the two types of elements. To facilitate the application of the test, five percent critical values are given for the number of runs for left-sided, right-sided and two-sided alternative hypotheses. The authors also present tables of critical values for up-and-down runs, which are obtained when the originally assessed sequence of observations is replaced by a sequence of +1 and -1 values, depending on the sign of the consecutive differences. The application of the above tests is illustrated with examples.
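
    A minimal sketch of the Wald-Wolfowitz runs test using the large-sample normal approximation for the number of runs; small samples should use exact critical-value tables such as those discussed above, and the example sequence is illustrative.

        # Hedged sketch: Wald-Wolfowitz runs test for a sequence of two types of
        # elements, using the large-sample normal approximation for the number of runs.
        # The example sequence is illustrative; small samples should use exact tables.
        import numpy as np
        from scipy.stats import norm

        def runs_test(sequence):
            """Return (number of runs, z statistic, two-sided p-value)."""
            x = np.asarray(sequence)
            runs = 1 + int(np.sum(x[1:] != x[:-1]))
            n1 = int(np.sum(x == x[0]))
            n2 = x.size - n1
            n = n1 + n2
            mean = 1.0 + 2.0 * n1 * n2 / n
            var = 2.0 * n1 * n2 * (2.0 * n1 * n2 - n) / (n**2 * (n - 1))
            z = (runs - mean) / np.sqrt(var)
            p = 2.0 * norm.sf(abs(z))
            return runs, z, p

        seq = list("AABBBABAABBAABABBBAA")
        print(runs_test(seq))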

  10. A Didactic Proposed for Teaching the Concepts of Electrons and Light in Secondary School Using Feynman's Path Sum Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Arlego, Marcelo; Otero, Maria Rita

    2012-01-01

    This work comprises an investigation of basic Quantum Mechanics (QM) teaching in high school. The organization of the concepts does not follow a historical line. The Path Integrals method of Feynman has been adopted as a Reference Conceptual Structure that is an alternative to the canonical formalism. We have designed a didactic sequence…

  11. A novel path generation method of onsite 5-axis surface inspection using the dual-cubic NURBS representation

    NASA Astrophysics Data System (ADS)

    Li, Wen-long; Wang, Gang; Zhang, Gang; Pang, Chang-tao; Yin, Zhou-pin

    2016-09-01

    Onsite surface inspection with a touch probe or a laser scanner is a promising technique for efficiently evaluating surface profile error. Existing work on 5-axis inspection path generation bears a serious drawback, however, as there is a drastic orientation change of the inspection axis. Such a sudden change may exceed the stringent physical limits on the speed and acceleration of the rotary motions of the machine tool. In this paper, we propose a novel path generation method for onsite 5-axis surface inspection. Accessibility cones are defined and used to generate alternative interference-free inspection directions. Then, the control points are optimally calculated to obtain the dual-cubic non-uniform rational B-spline (NURBS) curves, which respectively determine the path points and the axis vectors in an inspection path. The generated inspection path is smooth and interference-free, which addresses the ‘mutation and shake’ problems and guarantees stable speed and acceleration of the machine tool rotary motions. Its feasibility and validity are verified by onsite inspection experiments on an impeller blade.

  12. Projectile Two-dimensional Coordinate Measurement Method Based on Optical Fiber Coding Fire and its Coordinate Distribution Probability

    NASA Astrophysics Data System (ADS)

    Li, Hanshan; Lei, Zhiyong

    2013-01-01

    To improve projectile coordinate measurement precision in fire measurement systems, this paper introduces the optical fiber coding fire measurement method and its principle, sets up the corresponding measurement model, and analyzes coordinate errors by using the differential method. To study the projectile coordinate position distribution, the mathematical statistics hypothesis method is used to analyze the distribution law; the firing dispersion and the probability of the projectile hitting the object center are also studied. The results show that, at the given significance level, an exponential distribution is a reasonable description of the projectile position distribution. Experiments and calculations show that the optical fiber coding fire measurement method is scientific and feasible, and that it can yield accurate projectile coordinate positions.

  13. Superpositions of probability distributions

    NASA Astrophysics Data System (ADS)

    Jizba, Petr; Kleinert, Hagen

    2008-09-01

    Probability distributions which can be obtained from superpositions of Gaussian distributions of different variances v = σ^2 play a favored role in quantum theory and financial markets. Such superpositions need not necessarily obey the Chapman-Kolmogorov semigroup relation for Markovian processes because they may introduce memory effects. We derive the general form of the smearing distributions in v which do not destroy the semigroup property. The smearing technique has two immediate applications. It permits simplifying the system of Kramers-Moyal equations for smeared and unsmeared conditional probabilities, and can be conveniently implemented in the path integral calculus. In many cases, the superposition of path integrals can be evaluated much easier than the initial path integral. Three simple examples are presented, and it is shown how the technique is extended to quantum mechanics.
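
    As a small numerical illustration of such superpositions (not the paper's derivation), the sketch below smears a zero-mean Gaussian over an inverse-gamma distribution of variances, a standard scale-mixture construction that reproduces a Student-t distribution.

        # Hedged numerical illustration: smearing N(0, v) over an inverse-gamma
        # distribution of variances v reproduces a Student-t distribution (a standard
        # scale-mixture result, used here only to illustrate superpositions of
        # Gaussians of different variances).
        import numpy as np
        from scipy.stats import invgamma, t

        rng = np.random.default_rng(5)
        nu, n = 4.0, 200_000

        v = invgamma.rvs(nu / 2.0, scale=nu / 2.0, size=n, random_state=rng)
        x = np.sqrt(v) * rng.standard_normal(n)          # superposition of N(0, v)

        grid = np.linspace(-6.0, 6.0, 61)
        hist, edges = np.histogram(x, bins=grid, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        print(np.max(np.abs(hist - t.pdf(centers, df=nu))))   # small if mixture ~ t_nu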

  14. Points on the Path to Probability.

    ERIC Educational Resources Information Center

    Kiernan, James F.

    2001-01-01

    Presents the problem of points and the development of the binomial triangle, or Pascal's triangle. Examines various attempts to solve this problem to give students insight into the nature of mathematical discovery. (KHR)

  15. Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression

    NASA Astrophysics Data System (ADS)

    Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.

    2013-10-01

    Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
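
    To convey the state-space idea in code, the sketch below runs a simplified grid-based Bayes filter with a Gaussian random walk on the logit of the suppression probability and Bernoulli observations of a simulated binarized EEG; it is not the approximate expectation maximization algorithm of the paper.

        # Hedged sketch: a grid-based Bayes filter for a burst-suppression-probability
        # style model, with a Gaussian random walk on the logit of the suppression
        # probability and Bernoulli observations of the binarized EEG. This is a
        # simplified stand-in for the approximate EM algorithm described above.
        import numpy as np
        from scipy.special import expit
        from scipy.stats import norm

        rng = np.random.default_rng(6)

        # simulate a slowly varying suppression probability and binary observations
        T = 300
        x_true = np.cumsum(rng.normal(0.0, 0.05, T)) - 1.0     # latent logit state
        p_true = expit(x_true)
        obs = rng.binomial(1, p_true)                          # 1 = suppressed sample

        # grid filter on the logit scale
        grid = np.linspace(-6.0, 6.0, 601)
        p_grid = expit(grid)
        transition = norm.pdf(grid[:, None] - grid[None, :], scale=0.05)   # K[x_t, x_{t-1}]
        belief = norm.pdf(grid, loc=-1.0, scale=1.0)
        belief /= belief.sum()

        bsp = np.empty(T)
        for step in range(T):
            belief = transition @ belief                       # predict
            belief /= belief.sum()
            like = p_grid if obs[step] == 1 else (1.0 - p_grid)   # update
            belief *= like
            belief /= belief.sum()
            bsp[step] = np.sum(p_grid * belief)                # posterior mean BSP

        print("mean abs error vs truth:", np.mean(np.abs(bsp - p_true)))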

  16. Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kollár, Martin

    2012-05-01

    In all mobile communication technologies, a so-called random-access procedure is used to access the cell. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is a question of pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request or phantom RACH is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by sending an ESTABLISH INDICATION from MS to BTS. In this paper, a mathematical model for evaluating the Power RACH Busy Threshold (RACHBT) in order to guarantee a predetermined outage probability on the RACH is described and discussed. It focuses on the Global System for Mobile Communications (GSM); however, the obtained results can be generalized to the remaining mobile technologies (i.e. WCDMA and LTE).

  17. An empirical method for estimating probability density functions of gridded daily minimum and maximum temperature

    NASA Astrophysics Data System (ADS)

    Lussana, C.

    2013-04-01

    The presented work focuses on the investigation of gridded daily minimum (TN) and maximum (TX) temperature probability density functions (PDFs), with the intent of both characterising a region and detecting extreme values. The empirical PDF estimation procedure has been realised using the most recent years of gridded temperature analysis fields available at ARPA Lombardia, in Northern Italy. The spatial interpolation is based on an implementation of Optimal Interpolation using observations from a dense surface network of automated weather stations. An effort has been made to identify both the time period and the spatial areas with a stable data density; otherwise the elaboration could be influenced by the unsettled station distribution. The PDF used in this study is based on the Gaussian distribution; nevertheless, it is designed to have an asymmetrical (skewed) shape in order to enable distinction between warming and cooling events. Once the occurrence of extreme events has been properly defined, it is possible to deliver the information to users on a local scale in a concise way, such as: TX extremely cold/hot or TN extremely cold/hot.
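
    A minimal sketch of the kind of asymmetric Gaussian PDF described above is given below in Python. The two-piece form, the parameter values, and the 99th-percentile threshold for "extremely hot" are illustrative assumptions, not the ARPA Lombardia implementation.

      import numpy as np

      def two_piece_gaussian_pdf(x, mu, sigma_cold, sigma_warm):
          """Skewed (two-piece) Gaussian: sigma_cold controls the tail below mu,
          sigma_warm the tail above mu, so cooling and warming events are treated separately."""
          norm = 2.0 / (np.sqrt(2.0 * np.pi) * (sigma_cold + sigma_warm))
          sigma = np.where(x < mu, sigma_cold, sigma_warm)
          return norm * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      # flag TX values in the upper 1% tail of the fitted PDF as "extremely hot"
      x = np.linspace(-20.0, 45.0, 1301)
      pdf = two_piece_gaussian_pdf(x, mu=12.0, sigma_cold=6.0, sigma_warm=4.0)
      cdf = np.cumsum(pdf) * (x[1] - x[0])
      print(f"extremely hot TX threshold ~ {x[np.searchsorted(cdf, 0.99)]:.1f} degC")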

  18. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case Success or Failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
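
    The probability model itself is compact: a linear combination of the predictors passed through the logistic function. The Python sketch below illustrates that form; the predictor names and coefficient values are invented for illustration and are not the ones fitted in the paper.

      import numpy as np

      def test_success_probability(x, beta):
          """Logistic regression: P(full-duration test) for predictor vector x
          (leading 1.0 for the intercept) and fitted coefficients beta."""
          return 1.0 / (1.0 + np.exp(-np.dot(x, beta)))

      # hypothetical predictors: [intercept, article maturity score, planned duration (s), facility-mod flag]
      beta = np.array([1.2, 0.8, -0.002, -0.5])
      x = np.array([1.0, 0.6, 300.0, 1.0])
      print(f"P(success) = {test_success_probability(x, beta):.2f}")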

  19. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
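
    A bare-bones version of the weighted bootstrap idea is sketched below in Python: observations are resampled with probability proportional to their sampling weights (the inverse of the selection probability) and a distribution is refit to each replicate. The lognormal choice, the omission of censoring, and the toy data are simplifying assumptions made here for illustration.

      import numpy as np

      def weighted_bootstrap_lognormal(x, incl_prob, n_boot=2000, rng=None):
          """Weighted bootstrap sketch: resample with probability proportional to the
          sampling weights (1 / inclusion probability), refit a lognormal to each
          replicate, and return the bootstrap distribution of the population mean."""
          rng = np.random.default_rng(rng)
          w = 1.0 / np.asarray(incl_prob)
          w = w / w.sum()
          means = np.empty(n_boot)
          for b in range(n_boot):
              resample = rng.choice(x, size=len(x), replace=True, p=w)
              mu, sigma = np.mean(np.log(resample)), np.std(np.log(resample))
              means[b] = np.exp(mu + 0.5 * sigma ** 2)   # mean of the fitted lognormal
          return means

      # toy contamination data (CFU/g) with risk-based (unequal) selection probabilities
      x = np.array([0.4, 1.1, 2.5, 0.7, 5.3, 0.2, 8.9, 1.6])
      incl_prob = np.array([0.1, 0.2, 0.6, 0.1, 0.8, 0.1, 0.9, 0.3])
      print(np.percentile(weighted_bootstrap_lognormal(x, incl_prob, rng=0), [2.5, 50, 97.5]))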

  20. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  1. Improved constrained optimization method for reaction-path determination in the generalized hybrid orbital quantum mechanical/molecular mechanical calculations

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Re, Suyong; Sugita, Yuji; Ten-no, Seiichiro

    2013-01-01

    The nudged elastic band (NEB) and string methods are widely used to obtain the reaction path of chemical reactions and phase transitions. In these methods, however, it is difficult to define an accurate Lagrangian to generate the conservative forces. On the other hand, the constrained optimization with locally updated planes (CO-LUP) scheme defines target function properly and suitable for micro-iteration optimizations in quantum mechanical/molecular mechanical (QM/MM) systems, which uses the efficient second order QM optimization. However, the method does have problems of inaccurate estimation of reactions and inappropriate accumulation of images around the energy minimum. We introduce three modifications into the CO-LUP scheme to overcome these problems: (1) An improved tangent estimation of the reaction path, which is used in the NEB method, (2) redistribution of images using an energy-weighted interpolation before updating local tangents, and (3) reduction of the number of constraints, in particular translation/rotation constraints, for improved convergence. First, we test the method on the isomerization of alanine dipeptide without QM/MM calculation, showing that the method is comparable to the string method both in accuracy and efficiency. Next, we apply the method for defining the reaction paths of the rearrangement reaction catalyzed by chorismate mutase (CM) and of the phosphoryl transfer reaction catalyzed by cAMP-dependent protein kinase (PKA) using generalized hybrid orbital QM/MM calculations. The reaction energy barrier of CM is in high agreement with the experimental value. The path of PKA reveals that the enzyme reaction is associative and there is a late transfer of the substrate proton to Asp 166, which is in agreement with the recently published result using the NEB method.

  2. An Efficient Method to Calculate the Failure Rate of Dynamic Systems with Random Parameters using the Total Probability Theorem

    DTIC Science & Technology

    2015-05-12

    The paper presents an efficient method to calculate the failure rate of a linear vibratory system with random parameters excited by stationary Gaussian processes, using the total probability theorem. The approach is demonstrated using a vibratory system and can be easily extended to non-stationary Gaussian input processes.

  3. Prediction of rockburst probability given seismic energy and factors defined by the expert method of hazard evaluation (MRG)

    NASA Astrophysics Data System (ADS)

    Kornowski, Jerzy; Kurzeja, Joanna

    2012-04-01

    In this paper we suggest that a conditional estimator/predictor of rockburst probability (and rockburst hazard, P_T(t)) can be approximated with the formula P_T(t) = P_1(θ_1)···P_N(θ_N)·P_dyn,T(t), where P_dyn,T(t) is a time-dependent probability of rockburst given only the predicted seismic energy parameters, while the P_i(θ_i) are amplifying coefficients due to local geologic and mining conditions, as defined by the Expert Method of (rockburst) Hazard Evaluation (MRG) known in the Polish mining industry. All the elements of the formula are (approximately) calculable on-line and the resulting P_T value satisfies the inequalities 0 ≤ P_T(t) ≤ 1. As a result, the hazard space (0-1) can always be divided into smaller subspaces (e.g., 0-10^-5, 10^-5-10^-4, 10^-4-10^-3, 10^-3-1), possibly named with symbols (e.g., A, B, C, D, ...) called "hazard states", which saves the prediction users from worrying about probabilities. The estimator P_T can be interpreted as a formal statement of the (reformulated) Comprehensive Method of Rockburst State of Hazard Evaluation, well known in the Polish mining industry. The estimator P_T is natural, logically consistent and physically interpretable. Due to full formalization, it can be easily generalized, incorporating relevant information from other sources/methods.
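
    The multiplicative structure of the estimator and the mapping onto hazard states are simple to express in code. The Python sketch below uses invented amplifying coefficients and the example state boundaries from the abstract; it only illustrates the arithmetic and is not the MRG procedure itself.

      import numpy as np

      def rockburst_hazard(p_dyn, amplifying_factors):
          """Combine the seismic-energy-based probability P_dyn(t) with amplifying
          coefficients P_i(theta_i) as a product, then map the result to a hazard state."""
          p_t = p_dyn * np.prod(amplifying_factors)
          p_t = min(max(p_t, 0.0), 1.0)                      # keep 0 <= P_T <= 1
          bounds = [(1e-5, "A"), (1e-4, "B"), (1e-3, "C"), (1.0, "D")]
          state = next(label for upper, label in bounds if p_t <= upper)
          return p_t, state

      print(rockburst_hazard(p_dyn=2e-2, amplifying_factors=[0.9, 0.7, 0.5]))  # (0.0063, 'D')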

  4. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley P.

    2004-01-01

    Propulsion ground test facilities face the daily challenges of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Due to budgetary and schedule constraints, NASA and industry customers are pushing to test more components, for less money, in a shorter period of time. As these new rocket engine component test programs are undertaken, the lack of technology maturity in the test articles, combined with pushing the test facilities' capabilities to their limits, tends to lead to an increase in facility breakdowns and unsuccessful tests. Over the last five years Stennis Space Center's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and broken numerous test facility and test article parts. While various initiatives have been implemented to provide better propulsion test techniques and improve the quality, reliability, and maintainability of goods and parts used in the propulsion test facilities, unexpected failures during testing still occur quite regularly due to the harsh environment in which the propulsion test facilities operate. Previous attempts at modeling the lifecycle of a propulsion component test project have met with little success. Each of the attempts suffered from incomplete or inconsistent data on which to base the models. By focusing on the actual test phase of the test project rather than the formulation, design or construction phases, the quality and quantity of available data increases dramatically. A logistic regression model has been developed from the data collected over the last five years, allowing the probability of successfully completing a rocket propulsion component test to be calculated. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous

  5. An improved hypergeometric probability method for identification of functionally linked proteins using phylogenetic profiles.

    PubMed

    Kotaru, Appala Raju; Shameer, Khader; Sundaramurthy, Pandurangan; Joshi, Ramesh Chandra

    2013-01-01

    Predicting functions of proteins and alternatively spliced isoforms encoded in a genome is one of the important applications of bioinformatics in the post-genome era. Due to the practical limitation of experimentally characterizing all proteins encoded in a genome using biochemical studies, bioinformatics methods provide powerful tools for function annotation and prediction. These methods also help minimize the growing sequence-to-function gap. Phylogenetic profiling is a bioinformatics approach to identify the influence of a trait across species and can be employed to infer the evolutionary history of proteins encoded in genomes. Here we propose an improved phylogenetic profile-based method which considers the co-evolution of the reference genome to derive the basic similarity measure, the background phylogeny of target genomes for profile generation, and the assignment of weights to target genomes. The ordering of genomes and the runs of consecutive matches between the proteins were used to define phylogenetic relationships in the approach. We used the Escherichia coli K12 genome as the reference genome, and its 4195 proteins were used in the current analysis. We compared our approach with two existing methods, and our initial results show that its predictions outperform both existing approaches. In addition, we have validated our method using a targeted protein-protein interaction network derived from the protein-protein interaction database STRING. Our preliminary results indicate that improvement in function prediction can be attained by using coevolution-based similarity measures and by computing the runs on the same scale instead of in different scales. Our method can be applied at the whole-genome level for annotating hypothetical proteins from prokaryotic genomes.

  6. An improved hypergeometric probability method for identification of functionally linked proteins using phylogenetic profiles

    PubMed Central

    Kotaru, Appala Raju; Shameer, Khader; Sundaramurthy, Pandurangan; Joshi, Ramesh Chandra

    2013-01-01

    Predicting functions of proteins and alternatively spliced isoforms encoded in a genome is one of the important applications of bioinformatics in the post-genome era. Due to the practical limitation of experimentally characterizing all proteins encoded in a genome using biochemical studies, bioinformatics methods provide powerful tools for function annotation and prediction. These methods also help minimize the growing sequence-to-function gap. Phylogenetic profiling is a bioinformatics approach to identify the influence of a trait across species and can be employed to infer the evolutionary history of proteins encoded in genomes. Here we propose an improved phylogenetic profile-based method which considers the co-evolution of the reference genome to derive the basic similarity measure, the background phylogeny of target genomes for profile generation, and the assignment of weights to target genomes. The ordering of genomes and the runs of consecutive matches between the proteins were used to define phylogenetic relationships in the approach. We used the Escherichia coli K12 genome as the reference genome, and its 4195 proteins were used in the current analysis. We compared our approach with two existing methods, and our initial results show that its predictions outperform both existing approaches. In addition, we have validated our method using a targeted protein-protein interaction network derived from the protein-protein interaction database STRING. Our preliminary results indicate that improvement in function prediction can be attained by using coevolution-based similarity measures and by computing the runs on the same scale instead of in different scales. Our method can be applied at the whole-genome level for annotating hypothetical proteins from prokaryotic genomes. PMID:23750082

  7. A Variational Approach to Path Planning in Three Dimensions Using Level Set Methods

    DTIC Science & Technology

    2004-12-08

    planning. IEEE Transactions on Robotics and Automation, 14(1), 1998. [15] L. Kavraki, P. Svestka, J. Latombe, and M. Overmars. Probabilistic roadmaps for path planning in high dimensional configuration spaces. IEEE Transactions on Robotics and Automation, 12(4), 1996. [16] Ron Kimmel and James A

  8. A double-index method to classify Kuroshio intrusion paths in the Luzon Strait

    NASA Astrophysics Data System (ADS)

    Huang, Zhida; Liu, Hailong; Hu, Jianyu; Lin, Pengfei

    2016-06-01

    A double index (DI), made up of two sub-indices, is proposed to describe the spatial patterns of the Kuroshio intrusion and of mesoscale eddies west of the Luzon Strait, based on satellite altimeter data. The area-integrated negative and positive geostrophic vorticities are defined as the Kuroshio warm eddy index (KWI) and the Kuroshio cold eddy index (KCI), respectively. Three typical spatial patterns are identified by the DI: the Kuroshio warm eddy path (KWEP), the Kuroshio cold eddy path (KCEP), and the leaking path. The primary features of the DI and the three patterns are further investigated and compared with previous indices. The effects of the integration area and of the integration algorithm are investigated in detail. In general, the DI overcomes the problem of previously used indices in which the positive and negative geostrophic vorticities cancel each other out. Thus, the proportions of missing and misjudged events are greatly reduced using the DI. The DI, as compared with previously used indices, can better distinguish the paths of the Kuroshio intrusion and can be used for further research.
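
    The core of the double index, integrating the two signs of geostrophic vorticity separately so that they cannot cancel, can be sketched in a few lines of Python. The toy grid, cell areas, and sign conventions below are assumptions for illustration only.

      import numpy as np

      def kuroshio_double_index(vorticity, cell_area):
          """Integrate negative geostrophic vorticity (warm-eddy index, KWI) and positive
          vorticity (cold-eddy index, KCI) separately over the analysis box."""
          neg = np.where(vorticity < 0.0, vorticity, 0.0)
          pos = np.where(vorticity > 0.0, vorticity, 0.0)
          kwi = -np.sum(neg * cell_area)   # positive number measuring anticyclonic intrusion
          kci = np.sum(pos * cell_area)
          return kwi, kci

      # toy 3x3 vorticity field (s^-1) on cells of equal area (m^2)
      vort = np.array([[-2e-6, -1e-6, 0.5e-6],
                       [-1e-6,  0.0,  1e-6],
                       [ 0.2e-6, 1e-6, 2e-6]])
      area = np.full_like(vort, 1e8)
      print(kuroshio_double_index(vort, area))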

  9. Method and apparatus for producing an aircraft flare path control signal

    NASA Technical Reports Server (NTRS)

    Lambregts, Antonius A. (Inventor); Hansen, Rolf (Inventor)

    1982-01-01

    Aircraft altitude, ground velocity, and altitude rate signals are input to a computer which, using a unique control law, generates a pitch control surface command signal suitable for guiding an aircraft on its flare path to a specified runway touchdown point despite varying wind conditions.

  10. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  11. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  12. Probability of identification (POI): a statistical model for the validation of qualitative botanical identification methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...

  13. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    DOE PAGES

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; ...

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.
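
    The key ingredient, replacing the discontinuous indicator 1{q <= threshold} with a smooth surrogate so that the level-to-level variance decays faster, can be illustrated with a short Python sketch. The sigmoid form, the bandwidths, and the synthetic two-level samples are assumptions; the paper calibrates its smoothing function with an a posteriori optimization strategy.

      import numpy as np

      def smoothed_indicator(q, threshold, eps):
          """Sigmoid surrogate for the CDF indicator 1{q <= threshold}; smaller eps
          approaches the discontinuous indicator."""
          z = np.clip((q - threshold) / eps, -50.0, 50.0)   # avoid overflow in exp
          return 1.0 / (1.0 + np.exp(z))

      # two-level illustration: coarse and fine approximations of the same output samples
      rng = np.random.default_rng(0)
      q_fine = rng.normal(0.0, 1.0, 10000)
      q_coarse = q_fine + rng.normal(0.0, 0.05, 10000)      # coarse level differs slightly
      for eps in (0.5, 0.1, 0.02):
          diff = smoothed_indicator(q_fine, 0.0, eps) - smoothed_indicator(q_coarse, 0.0, eps)
          print(f"eps={eps:5.2f}  level-difference variance = {diff.var():.2e}")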

  14. An improved multilevel Monte Carlo method for estimating probability distribution functions in stochastic oil reservoir simulations

    SciTech Connect

    Lu, Dan; Zhang, Guannan; Webster, Clayton G.; Barbier, Charlotte N.

    2016-12-30

    In this paper, we develop an improved multilevel Monte Carlo (MLMC) method for estimating cumulative distribution functions (CDFs) of a quantity of interest, coming from numerical approximation of large-scale stochastic subsurface simulations. Compared with Monte Carlo (MC) methods, that require a significantly large number of high-fidelity model executions to achieve a prescribed accuracy when computing statistical expectations, MLMC methods were originally proposed to significantly reduce the computational cost with the use of multifidelity approximations. The improved performance of the MLMC methods depends strongly on the decay of the variance of the integrand as the level increases. However, the main challenge in estimating CDFs is that the integrand is a discontinuous indicator function whose variance decays slowly. To address this difficult task, we approximate the integrand using a smoothing function that accelerates the decay of the variance. In addition, we design a novel a posteriori optimization strategy to calibrate the smoothing function, so as to balance the computational gain and the approximation error. The combined proposed techniques are integrated into a very general and practical algorithm that can be applied to a wide range of subsurface problems for high-dimensional uncertainty quantification, such as a fine-grid oil reservoir model considered in this effort. The numerical results reveal that with the use of the calibrated smoothing function, the improved MLMC technique significantly reduces the computational complexity compared to the standard MC approach. Finally, we discuss several factors that affect the performance of the MLMC method and provide guidance for effective and efficient usage in practice.

  15. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-07

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.

  16. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-01

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole paths. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.

  17. Application of maximum likelihood to direct methods: the probability density function of the triple-phase sums. XI.

    PubMed

    Rius, Jordi

    2006-09-01

    The maximum-likelihood method is applied to direct methods to derive a more general probability density function of the triple-phase sums which is capable of predicting negative values. This study also proves that maximization of the origin-free modulus sum function S yields, within the limitations imposed by the assumed approximations, the maximum-likelihood estimates of the phases. It thus represents the formal theoretical justification of the S function that was initially derived from Patterson-function arguments [Rius (1993). Acta Cryst. A49, 406-409].

  18. Most Probable Number Rapid Viability PCR Method to Detect Viable Spores of Bacillus anthracis in Swab Samples

    SciTech Connect

    Letant, S E; Kane, S R; Murphy, G A; Alfaro, T M; Hodges, L; Rose, L; Raber, E

    2008-05-30

    This note presents a comparison of the Most-Probable-Number Rapid Viability (MPN-RV) PCR method and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN-RV PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10^4, 10^2, and 10 spores), even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN-RV PCR method is specific, and compatible with automated high-throughput sample processing and analysis protocols.

  19. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from the multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
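
    Dempster's rule itself is straightforward to implement once each source's evidence is expressed as masses on sets of classes. The Python sketch below combines two hypothetical bodies of evidence for a ground-cover problem; the class names and mass values are invented, and the interval-valued-probability machinery of the paper is not reproduced here.

      from itertools import product

      def dempster_combine(m1, m2):
          """Dempster's rule of combination for two bodies of evidence, each given as a
          dict mapping frozenset hypotheses to mass; conflicting mass is renormalized away."""
          combined, conflict = {}, 0.0
          for (a, ma), (b, mb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + ma * mb
              else:
                  conflict += ma * mb
          return {h: m / (1.0 - conflict) for h, m in combined.items()}

      # two sources weighing ground-cover classes {forest, water}
      spectral = {frozenset({"forest"}): 0.6, frozenset({"forest", "water"}): 0.4}
      terrain  = {frozenset({"forest"}): 0.3, frozenset({"water"}): 0.5,
                  frozenset({"forest", "water"}): 0.2}
      print(dempster_combine(spectral, terrain))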

  20. Equal optical path beam splitters by use of amplitude-splitting and wavefront-splitting methods for pencil beam interferometer.

    SciTech Connect

    Qian, S.; Takacs, P.

    2003-08-03

    A beam splitter to create two separated parallel beams is a critical unit of a pencil beam interferometer, for example the long trace profiler (LTP). The operating principle of the beam splitter can be based upon either amplitude-splitting (AS) or wavefront-splitting (WS). For precision measurements with the LTP, an equal optical path system with two parallel beams is desired. Frequency drift of the light source in a non-equal optical path system will cause the interference fringes to drift. An equal optical path prism beam splitter with an amplitude-splitting (AS-EBS) beam splitter and a phase shift beam splitter with a wavefront-splitting (WS-PSBS) are introduced. These beam splitters are well suited to the stability requirement for a pencil beam interferometer due to the characteristics of monolithic structure and equal optical path. Several techniques to produce WS-PSBS by hand are presented. In addition, the WS-PSBS using double thin plates, made from microscope cover plates, has great advantages of economy, convenience, availability and ease of adjustment over other beam splitting methods. Comparison of stability measurements made with the AS-EBS, WS-PSBS, and other beam splitters is presented.

  1. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    SciTech Connect

    Schmidt, Matthew; Constable, Steve; Ing, Christopher; Roy, Pierre-Nicholas

    2014-06-21

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics sampling using a Langevin equation based sampling of the canonical distribution. The LePIGS method originally incorporated a trivial trial wavefunction, ψ_T, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as a benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  2. Path integral methods for the dynamics of stochastic and disordered systems

    NASA Astrophysics Data System (ADS)

    Hertz, John A.; Roudi, Yasser; Sollich, Peter

    2017-01-01

    We review some of the techniques used to study the dynamics of disordered systems subject to both quenched and fast (thermal) noise. Starting from the Martin-Siggia-Rose/Janssen-De Dominicis-Peliti path integral formalism for a single variable stochastic dynamics, we provide a pedagogical survey of the perturbative, i.e. diagrammatic, approach to dynamics and how this formalism can be used for studying soft spin models. We review the supersymmetric formulation of the Langevin dynamics of these models and discuss the physical implications of the supersymmetry. We also describe the key steps involved in studying the disorder-averaged dynamics. Finally, we discuss the path integral approach for the case of hard Ising spins and review some recent developments in the dynamics of such kinetic Ising models.

  3. FicTrac: a visual method for tracking spherical motion and generating fictive animal paths.

    PubMed

    Moore, Richard J D; Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas; van Swinderen, Bruno; Srinivasan, Mandyam V

    2014-03-30

    Studying how animals interface with a virtual reality can further our understanding of how attention, learning and memory, sensory processing, and navigation are handled by the brain, at both the neurophysiological and behavioural levels. To this end, we have developed a novel vision-based tracking system, FicTrac (Fictive path Tracking software), for estimating the path an animal makes whilst rotating an air-supported sphere using only input from a standard camera and computer vision techniques. We have found that the accuracy and robustness of FicTrac outperforms a low-cost implementation of a standard optical mouse-based approach for generating fictive paths. FicTrac is simple to implement for a wide variety of experimental configurations and, importantly, is fast to execute, enabling real-time sensory feedback for behaving animals. We have used FicTrac to record the behaviour of tethered honeybees, Apis mellifera, whilst presenting visual stimuli in both open-loop and closed-loop experimental paradigms. We found that FicTrac could accurately register the fictive paths of bees as they walked towards bright green vertical bars presented on an LED arena. Using FicTrac, we have demonstrated closed-loop visual fixation in both the honeybee and the fruit fly, Drosophila melanogaster, establishing the flexibility of this system. FicTrac provides the experimenter with a simple yet adaptable system that can be combined with electrophysiological recording techniques to study the neural mechanisms of behaviour in a variety of organisms, including walking vertebrates.

  4. Mean-free-paths in concert and chamber music halls and the correct method for calibrating dodecahedral sound sources.

    PubMed

    Beranek, Leo L; Nishihara, Noriko

    2014-01-01

    The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient α_av, and that the mean-free-path between reflections is 4V/S_tot, where V is the volume of the room and S_tot is the total area of all of its surfaces. No account is taken of the diffusivity of the surfaces. The 4V/S_tot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4V/S_tot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths G_mid and reverberation times RT_60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4V/S_tot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method.
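
    The relation under test is a one-liner; the sketch below evaluates 4V/S_tot in Python for a hypothetical shoebox-shaped hall, ignoring seating and other interior surfaces.

      def mean_free_path(volume_m3, total_surface_m2):
          """Classical Sabine/Eyring mean-free-path estimate 4V/S_tot."""
          return 4.0 * volume_m3 / total_surface_m2

      # shoebox hall roughly 40 m x 25 m x 18 m
      V = 40 * 25 * 18
      S = 2 * (40 * 25 + 40 * 18 + 25 * 18)
      print(f"mean free path ~ {mean_free_path(V, S):.1f} m")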

  5. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    NASA Technical Reports Server (NTRS)

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976), consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter, is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 deg C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.

  6. Automatic estimation of sleep level for nap based on conditional probability of sleep stages and an exponential smoothing method.

    PubMed

    Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi

    2013-01-01

    An automatic sleep level estimation method was developed for monitoring and regulation of daytime nap sleep. The recorded nap data are separated into continuous 5-second segments. Features are extracted from EEGs, EOGs and EMG. A parameter of sleep level is defined, which is estimated based on the conditional probability of sleep stages. An exponential smoothing method is applied to the estimated sleep level. A total of 12 healthy subjects, with an average age of 22 years, participated in the experimental work. Compared with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real-time monitoring and regulation of naps are realizable based on the developed technique.
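
    The smoothing step is the simplest part of the pipeline and is sketched below in Python; the smoothing constant and the toy per-segment sleep-level values are illustrative assumptions, and the feature-extraction and conditional-probability stages are not reproduced.

      import numpy as np

      def smooth_sleep_level(levels, alpha=0.2):
          """Exponential smoothing of per-segment sleep-level estimates (one value per
          5-second segment); alpha controls how quickly the estimate tracks new segments."""
          smoothed = np.empty(len(levels))
          smoothed[0] = levels[0]
          for t in range(1, len(levels)):
              smoothed[t] = alpha * levels[t] + (1.0 - alpha) * smoothed[t - 1]
          return smoothed

      # raw sleep levels estimated from the stage conditional probabilities of each segment
      raw = np.array([0.1, 0.15, 0.4, 0.35, 0.6, 0.55, 0.7, 0.65, 0.5])
      print(np.round(smooth_sleep_level(raw), 2))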

  7. Follow-up: Prospective compound design using the 'SAR Matrix' method and matrix-derived conditional probabilities of activity.

    PubMed

    Gupta-Ostermann, Disha; Hirose, Yoichiro; Odagami, Takenao; Kouji, Hiroyuki; Bajorath, Jürgen

    2015-01-01

    In a previous Method Article, we have presented the 'Structure-Activity Relationship (SAR) Matrix' (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a "chemical space envelope" around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach.

  8. Free energy of conformational transition paths in biomolecules: The string method and its application to myosin VI

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin; Vanden-Eijnden, Eric

    2011-01-01

    A set of techniques developed under the umbrella of the string method is used in combination with all-atom molecular dynamics simulations to analyze the conformation change between the prepowerstroke (PPS) and rigor (R) structures of the converter domain of myosin VI. The challenges specific to the application of these techniques to such a large and complex biomolecule are addressed in detail. These challenges include (i) identifying a proper set of collective variables to apply the string method, (ii) finding a suitable initial string, (iii) obtaining converged profiles of the free energy along the transition path, (iv) validating and interpreting the free energy profiles, and (v) computing the mean first passage time of the transition. A detailed description of the PPS↔R transition in the converter domain of myosin VI is obtained, including the transition path, the free energy along the path, and the rates of interconversion. The methodology developed here is expected to be useful more generally in studies of conformational transitions in complex biomolecules. PMID:21361558

  9. Path integral molecular dynamics method based on a pair density matrix approximation: An algorithm for distinguishable and identical particle systems

    NASA Astrophysics Data System (ADS)

    Miura, Shinichi; Okazaki, Susumu

    2001-09-01

    In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta as in the PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10 116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that the PIMD with the pair density matrix approximation dramatically reduced the computational cost to obtain the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as that with the primitive approximation. With respect to the identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of discretization of the path achieved by this approximation enables us to construct a method of avoiding the problem of the vanishing pseudopotential encountered in the calculations by the primitive approximation.

  10. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    NASA Astrophysics Data System (ADS)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

    A myriad of novel applications have emerged nowadays for different types of navigation systems. One of the most frequent applications is wayfinding. Since there are significant differences between the nature of pedestrian wayfinding problems and those of vehicles, navigation services designed for vehicles are not appropriate for pedestrian wayfinding purposes. In addition, diversity in the environmental conditions of the users and in their preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is necessary that performs intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved with the help of a ubiquitous service that is adapted to the contexts. Such a service possesses both Context-Awareness and User-Awareness capabilities. These capabilities are the main features of ubiquitous services that make them flexible in response to any user in any situation. In this paper, we propose a multi-criteria path optimization method that provides a Ubiquitous Pedestrian Way Finding Service (UPWFS). The proposed method considers four criteria, summarized as the Length, Safety, Difficulty and Attraction of the path. A conceptual framework is proposed to show the influencing factors that affect the criteria. Then, a mathematical model is developed on which the proposed path optimization method is based. Finally, data of a local district in Tehran are chosen as the case study in order to evaluate the performance of the proposed method in real situations. Results of the study show that the proposed method successfully captures the effects of the contexts in the wayfinding procedure, which demonstrates its efficiency in providing a ubiquitous pedestrian wayfinding service.
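
    One simple way to realize such a multi-criteria path optimization is to scalarise the four criteria into a single edge cost and run a standard shortest-path search, as in the Python sketch below. The weights, the normalisation of the criteria to the 0-1 range, and the toy network are assumptions; the paper's own mathematical model is not reproduced here.

      import heapq

      def edge_cost(length, safety, difficulty, attraction, w=(0.4, 0.25, 0.2, 0.15)):
          """Illustrative scalarisation of the four criteria into one edge cost:
          safer and more attractive edges cost less; weights come from the user/context model."""
          return w[0] * length + w[1] * (1 - safety) + w[2] * difficulty + w[3] * (1 - attraction)

      def dijkstra(graph, src, dst):
          """Plain Dijkstra over a dict-of-dicts graph whose values are scalar edge costs."""
          pq, seen = [(0.0, src, [src])], set()
          while pq:
              cost, node, path = heapq.heappop(pq)
              if node == dst:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, c in graph.get(node, {}).items():
                  if nxt not in seen:
                      heapq.heappush(pq, (cost + c, nxt, path + [nxt]))
          return float("inf"), []

      # toy pedestrian network with criteria already normalised to 0-1
      g = {"A": {"B": edge_cost(0.3, 0.9, 0.1, 0.8), "C": edge_cost(0.2, 0.4, 0.5, 0.2)},
           "B": {"D": edge_cost(0.4, 0.8, 0.2, 0.7)},
           "C": {"D": edge_cost(0.3, 0.5, 0.6, 0.1)}}
      print(dijkstra(g, "A", "D"))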

  11. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  12. Easy transition path sampling methods: flexible-length aimless shooting and permutation shooting.

    PubMed

    Mullen, Ryan Gotchy; Shea, Joan-Emma; Peters, Baron

    2015-06-09

    We present new algorithms for conducting transition path sampling (TPS). Permutation shooting rigorously preserves the total energy and momentum of the initial trajectory and is simple to implement even for rigid water molecules. Versions of aimless shooting and permutation shooting that use flexible-length trajectories have simple acceptance criteria and are more computationally efficient than fixed-length versions. Flexible-length permutation shooting and inertial likelihood maximization are used to identify the reaction coordinate for vacancy migration in a two-dimensional trigonal crystal of Lennard-Jones particles. The optimized reaction coordinate eliminates nearly all recrossing of the transition state dividing surface.

  13. Urban stormwater capture curve using three-parameter mixed exponential probability density function and NRCS runoff curve number method.

    PubMed

    Kim, Sangdan; Han, Suhee

    2010-01-01

    Most related literature regarding designing urban non-point-source management systems assumes that precipitation event-depths follow the 1-parameter exponential probability density function to reduce the mathematical complexity of the derivation process. However, the method of expressing the rainfall is the most important factor for analyzing stormwater; thus, a better mathematical expression, which represents the probability distribution of rainfall depths, is suggested in this study. Also, the rainfall-runoff calculation procedure required for deriving a stormwater-capture curve is altered by the U.S. Natural Resources Conservation Service (Washington, D.C.) (NRCS) runoff curve number method to consider the nonlinearity of the rainfall-runoff relation and, at the same time, obtain a more verifiable and representative curve for design when applying it to urban drainage areas with complicated land-use characteristics, such as occurs in Korea. The result of developing the stormwater-capture curve from the rainfall data in Busan, Korea, confirms that the methodology suggested in this study provides a better solution than the pre-existing one.
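
    The rainfall-runoff step referred to above, the NRCS runoff curve number method, is compact enough to sketch directly; the Python example below uses the standard SI form of the curve number equations with an illustrative composite CN value.

      def nrcs_runoff(p_mm, cn):
          """NRCS curve number runoff (SI, depths in mm): S = 25400/CN - 254,
          initial abstraction Ia = 0.2*S, and Q = (P - Ia)^2 / (P - Ia + S) for P > Ia."""
          s = 25400.0 / cn - 254.0
          ia = 0.2 * s
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      # runoff from a 40 mm event over an urban drainage area with a composite CN of 85
      print(f"{nrcs_runoff(40.0, 85):.1f} mm of runoff")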

  14. Path Finder

    SciTech Connect

    Rigdon, J. Brian; Smith, Marcus Daniel; Mulder, Samuel A

    2014-01-07

    PathFinder is a graph search program, traversing a directed cyclic graph to find pathways between labeled nodes. Searches for paths through ordered sequences of labels are termed signatures. Determining the presence of signatures within one or more graphs is the primary function of PathFinder. PathFinder can work in either batch mode or interactively with an analyst. Results are limited to whether or not a given signature is present in the graph(s).
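
    In the spirit of the description above, the Python sketch below searches a small directed graph for paths whose node labels pass through an ordered signature; the data structures, the handling of intermediate non-matching nodes, and the cycle guard are assumptions made for illustration, not PathFinder's actual implementation.

      def find_signature_paths(graph, labels, signature):
          """Depth-first search for paths whose node labels match an ordered signature;
          intermediate nodes with non-matching labels are allowed along the way."""
          def extend(node, remaining, path):
              if not remaining:
                  yield list(path)
                  return
              for nxt in graph.get(node, ()):
                  if nxt not in path:                 # avoid revisiting within one path (cycles)
                      path.append(nxt)
                      if labels[nxt] == remaining[0]:
                          yield from extend(nxt, remaining[1:], path)
                      else:
                          yield from extend(nxt, remaining, path)
                      path.pop()
          for start, lab in labels.items():
              if lab == signature[0]:
                  yield from extend(start, signature[1:], [start])

      graph = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
      labels = {1: "a", 2: "x", 3: "b", 4: "b", 5: "c"}
      print(list(find_signature_paths(graph, labels, ["a", "b", "c"])))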

  15. Path integral method for predicting relative binding affinities of protein-ligand complexes

    PubMed Central

    Mulakala, Chandrika; Kaznessis, Yiannis N.

    2009-01-01

    We present a novel approach for computing biomolecular interaction binding affinities based on a simple path integral solution of the Fokker-Planck equation. Computing the free energy of protein-ligand interactions can expedite structure-based drug design. Traditionally, the problem is seen through the lens of statistical thermodynamics. The computations can become, however, prohibitively long for the change in the free energy upon binding to be determined accurately. In this work we present a different approach based on a stochastic kinetic formalism. Inspired by Feynman's path integral formulation, we extend the theory to classical interacting systems. The ligand is modeled as a Brownian particle subjected to the effective non-bonding interaction potential of the receptor. This allows the calculation of the relative binding affinities of interacting biomolecules in water to be computed as a function of the ligand's diffusivity and the curvature of the potential surface in the vicinity of the binding minimum. The calculation is thus exceedingly rapid. In test cases, the correlation coefficient between actual and computed free energies is >0.93 for accurate data-sets. PMID:19275144

  16. An experimental Method to Determine Photoelectron Partial Wave Probabilities and the Implications for Quantum Mechanically Complete Experiments

    NASA Astrophysics Data System (ADS)

    Yenen, Orhan

    2003-05-01

    A recent trend in AMO physics is to move from being a passive observer to an active controller of the outcome of quantum phenomena. Full control of quantum processes requires complete information about the quantum system; experiments which measure all the information allowed by quantum mechanics are called "Quantum Mechanically Complete Experiments". For example, when an isolated atom is photoionized, conservation laws limit the allowed partial waves of the photoelectron to a maximum of three. A quantum mechanically complete photoionization experiment will then have to determine all three partial wave probabilities and the two independent phases between the partial waves as a function of ionizing photon energy. From these five parameters all the quantities quantum mechanics allows one to measure can be determined for the "Residual Ion + Photoelectron" system. We have developed experimental methods [1, 2] to determine all three partial wave probabilities of photoelectrons when the residual ion is left in an excited state. Experimentally, Ar atoms are photoionized by circularly polarized synchrotron radiation produced by a unique VUV (vacuum ultraviolet) phase retarder we have installed at the Advanced Light Source (ALS) in Berkeley, CA. We measure the linear and circular polarization of the fine-structure-resolved fluorescent photons from the excited residual ions in specific directions. From the measurements one obtains the relativistic partial wave probabilities of the photoelectron. Our measurements highlight the significance of multielectron processes in photoionization dynamics and provide stringent tests of theory. The results indicate significant spin-dependent relativistic interactions during photoionization. [1] O. Yenen et al., Phys. Rev. Lett. 86, 979 (2001). [2] K. W. McLaughlin et al., Phys. Rev. Lett. 88, 123003 (2002).

  17. Response prediction techniques and case studies of a path blocking system based on Global Transmissibility Direct Transmissibility method

    NASA Astrophysics Data System (ADS)

    Wang, Zengwei; Zhu, Ping; Zhao, Jianxuan

    2017-02-01

    In this paper, the prediction capabilities of the Global Transmissibility Direct Transmissibility (GTDT) method are further developed. Two path blocking techniques solely using the easily measured variables of the original system to predict the response of a path blocking system are generalized to finite element models of continuous systems. The proposed techniques are derived theoretically in a general form for the scenarios of setting the response of a subsystem to zero and of removing the link between two directly connected subsystems. The objective of this paper is to verify the reliability of the proposed techniques by finite element simulations. Two typical cases, the structural vibration transmission case and the structure-borne sound case, in two different configurations are employed to illustrate the validity of proposed techniques. The points of attention for each case have been discussed, and conclusions are given. It is shown that for the two cases of blocking a subsystem the proposed techniques are able to predict the new response using measured variables of the original system, even though operational forces are unknown. For the structural vibration transmission case of removing a connector between two components, the proposed techniques are available only when the rotational component responses of the connector are very small. The proposed techniques offer relative path measures and provide an alternative way to deal with NVH problems. The work in this paper provides guidance and reference for the engineering application of the GTDT prediction techniques.

  18. Improved Most-Probable-Number Method To Detect Sulfate-Reducing Bacteria with Natural Media and a Radiotracer

    PubMed Central

    Vester, Flemming; Ingvorsen, Kjeld

    1998-01-01

    A greatly improved most-probable-number (MPN) method for selective enumeration of sulfate-reducing bacteria (SRB) is described. The method is based on the use of natural media and radiolabeled sulfate (35S-SO4^2-). The natural media used consisted of anaerobically prepared sterilized sludge or sediment slurries obtained from sampling sites. The densities of SRB in sediment samples from Kysing Fjord (Denmark) and activated sludge were determined by using a normal MPN (N-MPN) method with synthetic cultivation media and a tracer MPN (T-MPN) method with natural media. The T-MPN method with natural media always yielded significantly higher (100- to 1,000-fold-higher) MPN values than the N-MPN method with synthetic media. The recovery of SRB from environmental samples was investigated by simultaneously measuring sulfate reduction rates (by a 35S-radiotracer method) and bacterial counts by using the T-MPN and N-MPN methods, respectively. When bacterial numbers estimated by the T-MPN method with natural media were used, specific sulfate reduction rates (q_SO4) of 10^-14 to 10^-13 mol of SO4^2- cell^-1 day^-1 were calculated, which is within the range of q_SO4 values previously reported for pure cultures of SRB (10^-15 to 10^-14 mol of SO4^2- cell^-1 day^-1). q_SO4 values calculated from N-MPN values obtained with synthetic media were several orders of magnitude higher (2 × 10^-10 to 7 × 10^-10 mol of SO4^2- cell^-1 day^-1), showing that viable counts of SRB were seriously underestimated when standard enumeration media were used. Our results demonstrate that the use of natural media results in significant improvements in estimates of the true numbers of SRB in environmental samples. PMID:9572939

  19. Beam splitter and method for generating equal optical path length beams

    DOEpatents

    Qian, Shinan; Takacs, Peter

    2003-08-26

    The present invention is a beam splitter for splitting an incident beam into first and second beams so that the first and second beams have a fixed separation and are parallel upon exiting. The beam splitter includes a first prism, a second prism, and a film located between the prisms. The first prism is defined by a first thickness and a first perimeter which has a first major base. The second prism is defined by a second thickness and a second perimeter which has a second major base. The film is located between the first major base and the second major base for splitting the incident beam into the first and second beams. The first and second perimeters are right angle trapezoidal shaped. The beam splitter is configured for generating equal optical path length beams.

  20. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energies of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation performs well for calculating the internal energy of the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.
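
    For context, the sketch below evaluates the quantum internal energy of a single harmonic mode, which is the harmonic-oscillator baseline the TPIMC results are compared against; the 200 cm^-1 torsional frequency is a hypothetical example, and the code is not the TPIMC algorithm itself.

```python
import numpy as np

# Quantum vs. classical internal energy of a single harmonic mode at temperature T.
# This is the harmonic-oscillator baseline, not the TPIMC algorithm.
hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J K^-1

def harmonic_internal_energy(omega, T):
    """<E> = hbar*omega/2 + hbar*omega / (exp(hbar*omega/(kB*T)) - 1)."""
    x = hbar * omega / (kB * T)
    return hbar * omega * (0.5 + 1.0 / np.expm1(x))

T = 298.15                           # K (standard temperature)
omega = 2 * np.pi * 3e10 * 200.0     # hypothetical 200 cm^-1 torsional mode, in rad/s
E_quantum = harmonic_internal_energy(omega, T)
print(f"quantum   <E> = {E_quantum / (kB * T):.3f} kT")
print("classical <E> = 1.000 kT   (equipartition)")
```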

  1. Methods in probability and statistical inference. Final report, June 15, 1975-June 30, 1979. [Dept. of Statistics, Univ. of Chicago

    SciTech Connect

    Wallace, D L; Perlman, M D

    1980-06-01

    This report describes the research activities of the Department of Statistics, University of Chicago, during the period June 15, 1975 to July 30, 1979. Nine research projects are briefly described on the following subjects: statistical computing and approximation techniques in statistics; numerical computation of first passage distributions; probabilities of large deviations; combining independent tests of significance; small-sample efficiencies of tests and estimates; improved procedures for simultaneous estimation and testing of many correlations; statistical computing and improved regression methods; comparison of several populations; and unbiasedness in multivariate statistics. A description of the statistical consultation activities of the Department that are of interest to DOE, in particular, the scientific interactions between the Department and the scientists at Argonne National Laboratories, is given. A list of publications issued during the term of the contract is included.

  2. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its demonstration, it is validated showing that it leads to the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method puts into evidence the importance that is played by the probability density function of the phases associated to each time stamp for a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied this method to solve the power spectrum in the case the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found the analytical solutions of the above cases invoking the sum of Fresnel's integrals squared.

  3. Location and release time identification of pollution point source in river networks based on the Backward Probability Method.

    PubMed

    Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal

    2016-09-15

    The pollution of rivers due to accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, as it is able to provide information related to the prior location and the release time of the pollution. This method was originally developed and employed in groundwater pollution source identification problems. One of the objectives of this study is to apply this method in identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on the adjoint analysis. Then the developed model is verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In these simulations, all of the suspected points are identified using only one backward simulation. The results demonstrate that all of the suspected points determined by the BPM could be possible pollution sources. The proposed approach is accurate and computationally efficient and does not need any simplification in river geometry and flow. Due to this simplicity, it is highly recommended for practical purposes.

  4. Lexicographic Probability, Conditional Probability, and Nonstandard Probability

    DTIC Science & Technology

    2009-11-11

    the following conditions: CP1. µ(U|U) = 1 if U ∈ F′. CP2. µ(V1 ∪ V2 |U) = µ(V1|U) + µ(V2|U) if V1 ∩ V2 = ∅, U ∈ F′, and V1, V2 ∈ F. CP3. µ(V|U) = µ(V|X) × µ(X|U) if V ⊆ X ⊆ U, U, X ∈ F′, V ∈ F. Note that it follows from CP1 and CP2 that µ(·|U) is a probability measure on (W, F) (and, in ... CP2 hold. This is easily seen to determine µ. Moreover, µ vacuously satisfies CP3, since there do not exist distinct sets U and X in F′ such that U

  5. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    SciTech Connect

    Liu Di

    2008-10-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.
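
    The sketch below illustrates the underlying idea of a minimum-action calculation on a toy diffusive system: discretize a path between two metastable states and minimize the Freidlin-Wentzell action S[φ] = ½∫|φ̇ − b(φ)|² dt. It is only a schematic illustration under that simplifying assumption, not the modified MAM or the chemical-kinetics large-deviation Hamiltonian used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of a minimum-action path calculation for a toy bistable drift
# b(x) = x - x^3 (stable states at x = -1 and x = +1). The discretized
# Freidlin-Wentzell action is minimized over the interior path points.
b = lambda x: x - x**3
T, N = 20.0, 200                # path duration and number of segments
dt = T / N
x0, x1 = -1.0, 1.0              # fixed endpoints (the two metastable states)

def action(interior):
    path = np.concatenate(([x0], interior, [x1]))
    xdot = np.diff(path) / dt
    xmid = 0.5 * (path[:-1] + path[1:])
    return 0.5 * np.sum((xdot - b(xmid))**2) * dt

guess = np.linspace(x0, x1, N + 1)[1:-1]        # straight-line initial path
res = minimize(action, guess, method="L-BFGS-B")
print("minimum action estimate:", res.fun)
```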

  6. Hamilton-Jacobi equation for the least-action/least-time dynamical path based on fast marching method

    NASA Astrophysics Data System (ADS)

    Dey, Bijoy K.; Janicki, Marek R.; Ayers, Paul W.

    2004-10-01

    Classical dynamics can be described with Newton's equation of motion or, totally equivalently, using the Hamilton-Jacobi equation. Here, the possibility of using the Hamilton-Jacobi equation to describe chemical reaction dynamics is explored. This requires an efficient computational approach for constructing the physically and chemically relevant solutions to the Hamilton-Jacobi equation; here we solve Hamilton-Jacobi equations on a Cartesian grid using Sethian's fast marching method [J. A. Sethian, Proc. Natl. Acad. Sci. USA 93, 1591 (1996)]. Using this method, we can—starting from an arbitrary initial conformation—find reaction paths that minimize the action or the time. The method is demonstrated by computing the mechanism for two different systems: a model system with four different stationary configurations and the H+H2→H2+H reaction. Least-time paths (termed brachistochrones in classical mechanics) seem to be a suitable choice for the reaction coordinate, allowing one to determine the key intermediates and final product of a chemical reaction. For conservative systems the Hamilton-Jacobi equation does not depend on the time, so this approach may be useful for simulating systems where important motions occur on a variety of different time scales.
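
    For reference, the stationary Hamilton-Jacobi (eikonal) relations that underlie least-time path calculations of this kind are the textbook forms below; they are included as background and are not quoted from the paper.

```latex
% Textbook stationary Hamilton-Jacobi (eikonal) relations for least-time paths;
% background only, not the specific equations solved in the paper.
\begin{align}
  |\nabla T(\mathbf{x})| &= \frac{1}{v(\mathbf{x})}
  && \text{(least arrival time $T$ for local speed $v$)}\\
  v(\mathbf{x}) &= \sqrt{\tfrac{2}{m}\bigl(E - V(\mathbf{x})\bigr)}
  && \text{(classical speed at energy $E$ in potential $V$)}
\end{align}
```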

  7. Room Acoustical Simulation Algorithm Based on the Free Path Distribution

    NASA Astrophysics Data System (ADS)

    VORLÄNDER, M.

    2000-04-01

    A new algorithm is presented which provides estimates of impulse responses in rooms. It is applicable to arbitrarily shaped rooms, thus including non-diffuse spaces like workrooms or offices. In the latter cases, for instance, sound propagation curves are of interest for noise control applications. In the case of concert halls and opera houses, the method enables very fast predictions of room acoustical criteria like reverberation time, strength or clarity. The method is based on low-resolution ray tracing and recording of the free paths. Estimates of impulse responses are derived from evaluation of the free path distribution and of the free path transition probabilities.
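
    As background for the free-path statistics mentioned above, the sketch below evaluates two standard diffuse-field room-acoustics quantities, the mean free path 4V/S and the Sabine reverberation time; the room parameters are hypothetical and this is not the paper's ray-tracing algorithm.

```python
# Two standard room-acoustics quantities related to free-path statistics:
# the mean free path of a diffuse sound field, 4V/S, and the Sabine reverberation time.
# Values are hypothetical; this is not the ray-tracing algorithm of the paper.
V = 12000.0      # room volume, m^3 (e.g. a small concert hall)
S = 4200.0       # total surface area, m^2
alpha = 0.25     # average absorption coefficient

mean_free_path = 4.0 * V / S                 # ~11.4 m between wall reflections
t60_sabine = 0.161 * V / (S * alpha)         # ~1.8 s reverberation time
print(f"mean free path: {mean_free_path:.1f} m")
print(f"Sabine T60:     {t60_sabine:.2f} s")
```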

  8. Computing the optimal path in stochastic dynamical systems

    NASA Astrophysics Data System (ADS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  9. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
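
    The figures of merit quoted above follow from a binary confusion matrix in the usual way; the sketch below shows the computation with hypothetical counts, not the study's raw data.

```python
import math

# Sensitivity, specificity and Matthews correlation coefficient (MCC) from a
# binary confusion matrix. The counts are hypothetical placeholders.
tp, fn = 50, 37       # cancer samples correctly / incorrectly classified
tn, fp = 94, 40       # control samples correctly / incorrectly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)
print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, MCC {mcc:.2f}")
```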

  10. A multiscale finite element model validation method of composite cable-stayed bridge based on Probability Box theory

    NASA Astrophysics Data System (ADS)

    Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan

    2016-05-01

    Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating procedure, a novel approach to obtain coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which can provide lower and upper bounds for the purpose of quantifying and transmitting the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy, as the overlap ratio index of each modal frequency is over 89% without the average absolute value of relative errors, and the CDF of the normal distribution coincides well with the measured frequencies of Guanhe Bridge. The validated multiscale FE model may be further used in structural damage prognosis and safety prognosis.

  11. SSME propellant path leak detection

    NASA Technical Reports Server (NTRS)

    Crawford, Roger; Shohadaee, Ahmad Ali

    1989-01-01

    The complicated high-pressure cycle of the space shuttle main engine (SSME) propellant path provides many opportunities for external propellant path leaks while the engine is running. This mode of engine failure may be detected and analyzed with sufficient speed to save critical engine test hardware from destruction. The leaks indicate hardware failures which will damage or destroy an engine if undetected; therefore, detection of both cryogenic and hot gas leaks is the objective of this investigation. The primary objective of this phase of the investigation is the experimental validation of techniques for detecting and analyzing propellant path external leaks which have a high probability of occurring on the SSME. The selection of candidate detection methods requires a good analytic model for leak plumes which would develop from external leaks and an understanding of radiation transfer through the leak plume. One advanced propellant path leak detection technique is obtained by using state-of-the-art technology infrared (IR) thermal imaging systems combined with computer, digital image processing, and expert systems for the engine protection. The feasibility of IR leak plume detection is evaluated on subscale simulated laboratory plumes to determine sensitivity, signal to noise, and general suitability for the application.

  12. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-08-04

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization
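
    A generic sketch of the explanatory-variable screening described above (VIF below 2.5, significance at the 95-percent level) is given below using synthetic data and hypothetical basin-characteristic names; it is not the USGS regional regression itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Generic explanatory-variable screening: drop candidate basin characteristics with
# VIF >= 2.5, then fit an OLS model and check p-values at the 0.05 level.
# The data are synthetic; this is not the USGS regional regression itself.
rng = np.random.default_rng(0)
n = 281
X = pd.DataFrame({
    "log_drainage_area":  rng.normal(2.0, 0.5, n),
    "mean_basin_slope":   rng.normal(5.0, 1.0, n),
    "mean_annual_precip": rng.normal(50.0, 8.0, n),
})
y = 1.5 + 0.8 * X["log_drainage_area"] + 0.02 * X["mean_annual_precip"] + rng.normal(0, 0.2, n)

exog = sm.add_constant(X)
vifs = {col: variance_inflation_factor(exog.values, i)
        for i, col in enumerate(exog.columns) if col != "const"}
keep = [c for c, v in vifs.items() if v < 2.5]          # multicollinearity screen
model = sm.OLS(y, sm.add_constant(X[keep])).fit()
significant = model.pvalues.drop("const") <= 0.05       # 95-percent confidence screen
print(vifs)
print(significant.to_dict())
```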

  13. Reliable and efficient reaction path and transition state finding for surface reactions with the growing string method.

    PubMed

    Jafari, Mina; Zimmerman, Paul M

    2017-04-15

    The computational challenge of fast and reliable transition state and reaction path optimization requires new methodological strategies to maintain low cost, high accuracy, and systematic searching capabilities. The growing string method using internal coordinates has proven to be highly effective for the study of molecular, gas phase reactions, but difficulties in choosing a suitable coordinate system for periodic systems have prevented its use for surface chemistry. New developments are therefore needed, and presented herein, to handle surface reactions which include atoms with large coordination numbers that cannot be treated using standard internal coordinates. The double-ended and single-ended growing string methods are implemented using a hybrid coordinate system, then benchmarked for a test set of 43 elementary reactions occurring on surfaces. These results show that the growing string method is at least 45% faster than the widely used climbing image-nudged elastic band method, which also fails to converge in several of the test cases. Additionally, the surface growing string method has a unique single-ended search method which can move outward from an initial structure to find the intermediates, transition states, and reaction paths simultaneously. This powerful explorative feature of the single-ended growing string method is demonstrated to uncover, for the first time, the mechanism for atomic layer deposition of TiN on the Cu(111) surface. This reaction is found to proceed through multiple hydrogen-transfer and ligand-exchange events, while formation of H-bonds stabilizes intermediates of the reaction. Purging gaseous products out of the reaction environment is the driving force for these reactions. © 2017 Wiley Periodicals, Inc.

  14. Tracking hurricane paths

    NASA Technical Reports Server (NTRS)

    Prabhakaran, Nagarajan; Rishe, Naphtali; Athauda, Rukshan

    1997-01-01

    The South East coastal region experiences hurricane threat for almost six months in every year. To improve the accuracy of hurricane forecasts, meteorologists would need the storm paths of both the present and the past. A hurricane path can be established if we could identify the correct position of the storm at different times right from its birth to the end. We propose a method based on both spatial and temporal image correlations to locate the position of a storm from satellite images. During the hurricane season, the satellite images of the Atlantic ocean near the equator are examined for the hurricane presence. This is accomplished in two steps. In the first step, only segments with more than a particular value of cloud cover are selected for analysis. Next, we apply image processing algorithms to test the presence of a hurricane eye in the segment. If the eye is found, the coordinate of the eye is recorded along with the time stamp of the segment. If the eye is not found, we examine adjacent segments for the existence of hurricane eye. It is probable that more than one hurricane eye could be found from different segments of the same period. Hence, the above process is repeated till the entire potential area for hurricane birth is exhausted. The subsequent/previous position of each hurricane eye will be searched in the appropriate adjacent segments of the next/previous period to mark the hurricane path. The temporal coherence and spatial coherence of the images are taken into account by our scheme in determining the segments and the associated periods required for analysis.

  15. "Albedo dome": a method for measuring spectral flux-reflectance in a laboratory for media with long optical paths.

    PubMed

    Light, Bonnie; Carns, Regina C; Warren, Stephen G

    2015-06-10

    A method is presented for accurate measurement of spectral flux-reflectance (albedo) in a laboratory, for media with long optical path lengths, such as snow and ice. The approach uses an acrylic hemispheric dome, which, when placed over the surface being studied, serves two functions: (i) it creates an overcast "sky" to illuminate the target surface from all directions within a hemisphere, and (ii) serves as a platform for measuring incident and backscattered spectral radiances, which can be integrated to obtain fluxes. The fluxes are relative measurements and because their ratio is used to determine flux-reflectance, no absolute radiometric calibrations are required. The dome and surface must meet minimum size requirements based on the scattering properties of the surface. This technique is suited for media with long photon path lengths since the backscattered illumination is collected over a large enough area to include photons that reemerge from the domain far from their point of entry because of multiple scattering and small absorption. Comparison between field and laboratory albedo of a portable test surface demonstrates the viability of this method.

  16. Start and Stop Rules for Exploratory Path Analysis.

    ERIC Educational Resources Information Center

    Shipley, Bill

    2002-01-01

    Describes a method for choosing rejection probabilities for the tests of independence that are used in constraint-based algorithms of exploratory path analysis. The method consists of generating a Markov or semi-Markov model from the equivalence class represented by a partial ancestral graph and then testing the d-separation implications. (SLD)

  17. Confidence Probability versus Detection Probability

    SciTech Connect

    Axelrod, M

    2005-08-18

    In a discovery sampling activity the auditor seeks to vet an inventory by measuring (or inspecting) a random sample of items from the inventory. When the auditor finds every sample item in compliance, he must then make a confidence statement about the whole inventory. For example, the auditor might say: ''We believe that this inventory of 100 items contains no more than 5 defectives with 95% confidence.'' Note this is a retrospective statement in that it asserts something about the inventory after the sample was selected and measured. Contrast this to the prospective statement: ''We will detect the existence of more than 5 defective items in this inventory with 95% probability.'' The former uses confidence probability while the latter uses detection probability. For a given sample size, the two probabilities need not be equal, indeed they could differ significantly. Both these probabilities critically depend on the auditor's prior belief about the number of defectives in the inventory and how he defines non-compliance. In other words, the answer strongly depends on how the question is framed.
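
    The prospective detection probability in the example above can be computed from the hypergeometric distribution, as in the sketch below; the sample sizes are illustrative.

```python
from scipy.stats import hypergeom

# Detection probability for the prospective statement quoted above: the chance that a
# random sample from a 100-item inventory containing 6 or more defectives (i.e. "more
# than 5") would include at least one defective item.
N_items = 100

def detection_probability(sample_size, defectives):
    # P(at least one defective in the sample) = 1 - P(zero defectives in the sample)
    return 1.0 - hypergeom.pmf(0, N_items, defectives, sample_size)

for n in (20, 40, 45, 60):
    p = detection_probability(n, defectives=6)
    print(f"sample size {n:3d}: detection probability {p:.3f}")
```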

  18. On Numerical Methods of Solving Some Optimal Path Problems on the Plane

    NASA Astrophysics Data System (ADS)

    Ushakov, V. N.; Matviychuk, A. R.; Malev, A. G.

    Three numerical methods for solving time-optimal control problems for a system under phase constraints are described in the paper. Two of the suggested methods are based on a transition to a discrete-time model, the construction of attainability sets, and the application of the guide construction. The third method is based on Dijkstra's algorithm.

  19. Effect-based interpretation of toxicity test data using probability and comparison with alternative methods of analysis

    SciTech Connect

    Gully, J.R.; Baird, R.B.; Markle, P.J.; Bottomley, J.P.

    2000-01-01

    A methodology is described that incorporates the intra- and intertest variability and the biological effect of bioassay data in evaluating the toxicity of single and multiple tests for regulatory decision-making purposes. The single- and multiple-test regulatory decision probabilities were determined from t values (n − 1, one-tailed) derived from the estimated biological effect and the associated standard error at the critical sample concentration. Single-test regulatory decision probabilities below the selected minimum regulatory decision probability identify individual tests as noncompliant. A multiple-test regulatory decision probability is determined by combining the regulatory decision probability of a series of single tests. A multiple-test regulatory decision probability below the multiple-test regulatory decision minimum identifies groups of tests in which the magnitude and persistence of the toxicity is sufficient to be considered noncompliant or to require enforcement action. Regulatory decision probabilities derived from the t distribution were compared with results based on standard and bioequivalence hypothesis tests using single- and multiple-concentration toxicity test data from an actual national pollutant discharge elimination system permit. The probability-based approach incorporated the precision of the effect estimate into regulatory decisions at a fixed level of effect. Also, probability-based interpretation of toxicity tests provides incentive to laboratories to produce, and permit holders to use, high-quality, precise data, particularly when multiple tests are used in regulatory decisions. These results are contrasted with standard and bioequivalence hypothesis tests in which the intratest precision is a determining factor in setting the biological effect used for regulatory decisions.
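
    The sketch below illustrates, with hypothetical numbers, the kind of single-test probability described above: a one-tailed t probability (n − 1 degrees of freedom) formed from the effect estimate and its standard error; the exact regulatory decision rule is as defined in the paper, not in this sketch.

```python
from scipy.stats import t

# Illustrative single-test probability in the spirit of the abstract: a one-tailed
# probability from the t distribution (n - 1 degrees of freedom) derived from the
# estimated biological effect and its standard error at the critical concentration.
# All numbers are hypothetical; the paper compares such probabilities against a
# selected minimum regulatory decision probability when classifying tests.
effect = 0.30   # estimated biological effect at the critical sample concentration
se = 0.12       # standard error of that effect estimate
n = 5           # number of replicates

t_value = effect / se
decision_probability = t.cdf(t_value, df=n - 1)
print(f"t = {t_value:.2f}, single-test decision probability = {decision_probability:.3f}")
```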

  20. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-04-09

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will be gradually replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based key pre-distribution and Basic Random key pre-distribution (PPBR) to be used in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. The tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes.

  1. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method

    PubMed Central

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will be gradually replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based key pre-distribution and Basic Random key pre-distribution (PPBR) to be used in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. The tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulations clearly show that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624
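
    The sketch below shows only the classic symmetric-bivariate-polynomial ingredient that polynomial pool-based pre-distribution builds on, with toy parameters; it is not the complete hybrid PPBR scheme or the tree-based path key establishment proposed in the paper.

```python
import random

# Building block behind polynomial pool-based key pre-distribution: a symmetric bivariate
# polynomial f(x, y) = f(y, x) over a prime field. Each node u stores the share f(u, y);
# nodes u and v then derive the same pairwise key f(u, v) = f(v, u).
P = 2**31 - 1      # prime modulus (toy size)
DEGREE = 3

random.seed(1)
# symmetric coefficient matrix a[i][j] = a[j][i]
a = [[0] * (DEGREE + 1) for _ in range(DEGREE + 1)]
for i in range(DEGREE + 1):
    for j in range(i, DEGREE + 1):
        a[i][j] = a[j][i] = random.randrange(P)

def share(u):
    """Coefficients (in y) of f(u, y), the share pre-loaded on node u."""
    return [sum(a[i][j] * pow(u, i, P) for i in range(DEGREE + 1)) % P
            for j in range(DEGREE + 1)]

def pairwise_key(my_share, peer_id):
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

u, v = 17, 42
assert pairwise_key(share(u), v) == pairwise_key(share(v), u)
print("shared key:", pairwise_key(share(u), v))
```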

  2. A path-independent method for barrier option pricing in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Rashidi Ranjbar, Hedieh; Seifi, Abbas

    2015-12-01

    This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
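
    For comparison purposes of the kind mentioned above, a minimal Monte Carlo pricer for a discretely monitored down-and-out call under a two-state regime-switching Black-Scholes model might look like the sketch below; all parameters are hypothetical and this is the simulation benchmark, not the proposed path-independent method.

```python
import numpy as np

# Monte Carlo benchmark: discretely monitored down-and-out call under a two-state
# Markov-switching Black-Scholes model (regime-dependent volatility). Parameters are
# hypothetical placeholders.
rng = np.random.default_rng(42)
S0, K, B = 100.0, 100.0, 80.0           # spot, strike, knock-out barrier
r, T = 0.03, 1.0
sigma = np.array([0.15, 0.35])          # volatility in regimes 0 and 1
Q = np.array([[-0.5, 0.5],              # generator of the two-state Markov chain
              [1.0, -1.0]])
n_steps, n_paths = 252, 100_000
dt = T / n_steps
trans = np.eye(2) + Q * dt              # one-step transition probabilities (Euler)

regime = np.zeros(n_paths, dtype=int)
S = np.full(n_paths, S0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    switch = rng.random(n_paths) < trans[regime, 1 - regime]   # leave current regime?
    regime = np.where(switch, 1 - regime, regime)
    z = rng.standard_normal(n_paths)
    vol = sigma[regime]
    S *= np.exp((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z)
    alive &= S > B                                              # knock out at the barrier

payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(f"down-and-out call price: {price:.3f}")
```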

  3. Noncontact common-path Fourier domain optical coherence tomography method for in vitro intraocular lens power measurement.

    PubMed

    Huang, Yong; Zhang, Kang; Kang, Jin U; Calogero, Don; James, Robert H; Ilev, Ilko K

    2011-12-01

    We propose a novel common-path Fourier domain optical coherence tomography (CP-FD-OCT) method for noncontact, accurate, and objective in vitro measurement of the dioptric power of intraocular lens (IOL) implants. The CP-FD-OCT method's principle of operation is based on simple two-dimensional scanning common-path Fourier domain optical coherence tomography. By reconstructing the anterior and posterior IOL surfaces, the radii of the two surfaces, and thus the IOL dioptric power, are determined. The CP-FD-OCT design provides high accuracy of IOL surface reconstruction. The axial position detection accuracy is calibrated at 1.22 μm in balanced saline solution used for simulation of in situ conditions. The lateral sampling rate is controlled by the step size of the linear scanning systems. IOL samples with labeled dioptric power in the low-power (5D), mid-power (20D and 22D), and high-power (36D) ranges under in situ conditions are tested. We obtained a mean power of 4.95/20.11/22.09/36.25 D with high levels of repeatability estimated by a standard deviation of 0.10/0.18/0.2/0.58 D and a relative error of 2/0.9/0.9/1.6%, based on five measurements for each IOL, respectively. The new CP-FD-OCT method provides an independent source of IOL power measurement data as well as information for evaluating other optical properties of IOLs such as refractive index, central thickness, and aberrations.

  4. Noncontact common-path Fourier domain optical coherence tomography method for in vitro intraocular lens power measurement

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Kang, Jin U.; Calogero, Don; James, Robert H.; Ilev, Ilko K.

    2011-12-01

    We propose a novel common-path Fourier domain optical coherence tomography (CP-FD-OCT) method for noncontact, accurate, and objective in vitro measurement of the dioptric power of intraocular lens (IOL) implants. The CP-FD-OCT method's principle of operation is based on simple two-dimensional scanning common-path Fourier domain optical coherence tomography. By reconstructing the anterior and posterior IOL surfaces, the radii of the two surfaces, and thus the IOL dioptric power, are determined. The CP-FD-OCT design provides high accuracy of IOL surface reconstruction. The axial position detection accuracy is calibrated at 1.22 μm in balanced saline solution used for simulation of in situ conditions. The lateral sampling rate is controlled by the step size of the linear scanning systems. IOL samples with labeled dioptric power in the low-power (5D), mid-power (20D and 22D), and high-power (36D) ranges under in situ conditions are tested. We obtained a mean power of 4.95/20.11/22.09/36.25 D with high levels of repeatability estimated by a standard deviation of 0.10/0.18/0.2/0.58 D and a relative error of 2/0.9/0.9/1.6%, based on five measurements for each IOL, respectively. The new CP-FD-OCT method provides an independent source of IOL power measurement data as well as information for evaluating other optical properties of IOLs such as refractive index, central thickness, and aberrations.
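
    Turning reconstructed surface radii into a dioptric power can be illustrated with the standard thick-lens (Gullstrand) relation, as in the sketch below; the refractive indices, radii and thickness are hypothetical examples, not values measured in the paper.

```python
# Standard thick-lens (Gullstrand) relation used to turn surface radii into a dioptric
# power; all values are hypothetical examples, not measurements from the paper.
n_iol = 1.46        # refractive index of the IOL material (assumed)
n_med = 1.336       # balanced saline solution, approximately the aqueous index
R1 = 0.0125         # anterior radius of curvature, m (convex: positive)
R2 = -0.0125        # posterior radius of curvature, m (convex toward the back: negative)
t = 0.0008          # central thickness, m

P1 = (n_iol - n_med) / R1                     # anterior surface power, diopters
P2 = (n_med - n_iol) / R2                     # posterior surface power, diopters
P = P1 + P2 - (t / n_iol) * P1 * P2           # thick-lens equation
print(f"equivalent IOL power: {P:.2f} D")     # ~20 D for these example values
```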

  5. [A method to calculate the probability of paternity between relatives--a paternity case where the putative father was a deceased granduncle].

    PubMed

    Ishitani, A; Minakata, K; Ito, N; Nagaike, C; Morimura, Y; Hirota, T; Hatake, K

    1996-06-01

    To test paternity in a case where the putative father was a deceased uncle of the mother (the plaintiff's granduncle), we designed a new method to calculate the likelihood-based probability of paternity. The putative father's genotypes for red cell antigens, HLA and short tandem repeat (STR) polymorphisms were estimated from those of the mother and sister of the plaintiff. When the probability was calculated from the frequencies in unrelated individuals (the standard method), a significant bias might be introduced, since the putative father and the plaintiff were likely to share alleles inherited from their common ancestry. Therefore, we designed a new method to calculate the likelihood ratio from the frequencies in the group of mother's uncles estimated from the mother's genotypes. The probability (0.9299) calculated with our method was found to be lower than that (0.9992) obtained with the standard method, indicating that the new method could remove the bias introduced by the common ancestry.

  6. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    SciTech Connect

    Zeng Xiancheng; Hu Hao; Hu Xiangqian; Yang Weitao

    2009-04-28

    A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is only carried out with the QM subsystem fixed. It thus avoids 'on-the-fly' QM calculations and thus overcomes the high computational cost in the direct QM/MM MD sampling. In the applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.

  7. Comparison of path integral molecular dynamics methods for the infrared absorption spectrum of liquid water

    NASA Astrophysics Data System (ADS)

    Habershon, Scott; Fanourgakis, George S.; Manolopoulos, David E.

    2008-08-01

    The ring polymer molecular dynamics (RPMD) and partially adiabatic centroid molecular dynamics (PA-CMD) methods are compared and contrasted in an application to the infrared absorption spectrum of a recently parametrized flexible, polarizable, Thole-type potential energy model for liquid water. Both methods predict very similar spectra in the low-frequency librational and intramolecular bending region at wavenumbers below 2500 cm-1. However, the RPMD spectrum is contaminated in the high-frequency O-H stretching region by contributions from the internal vibrational modes of the ring polymer. This problem is avoided in the PA-CMD method, which adjusts the elements of the Parrinello-Rahman mass matrix so as to shift the frequencies of these vibrational modes beyond the spectral range of interest. PA-CMD does not require any more computational effort than RPMD and it is clearly the better of the two methods for simulating vibrational spectra.

  8. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
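
    A simple numerical illustration of the quantity described above is to integrate the bivariate Gaussian location-error density over a disk centered on the facility, for example by Monte Carlo as in the sketch below; the ellipse parameters are hypothetical.

```python
import numpy as np

# Integrate a bivariate Gaussian location-error density over a disk of radius R
# centred on a facility. Ellipse parameters and distances are hypothetical.
rng = np.random.default_rng(0)
mean = np.array([3.0, 1.0])            # reported stroke position relative to the facility, km
cov = np.array([[1.2**2, 0.3],         # location-error covariance from the error ellipse, km^2
                [0.3, 0.8**2]])
R = 5.0                                # radius of interest around the facility, km

samples = rng.multivariate_normal(mean, cov, size=1_000_000)
inside = np.linalg.norm(samples, axis=1) <= R
print(f"P(stroke within {R} km of facility) = {inside.mean():.4f}")
```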

  9. Statistics and Probability

    NASA Astrophysics Data System (ADS)

    Laktineh, Imad

    2010-04-01

    This course constitutes a brief introduction to probability applications in high energy physics. First, the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented on. The central limit theorem and its consequences are analysed. Finally, some numerical methods used to produce different kinds of probability distributions are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.

  10. Probability of satellite collision

    NASA Technical Reports Server (NTRS)

    Mccarter, J. W.

    1972-01-01

    A method is presented for computing the probability of a collision between a particular artificial earth satellite and any one of the total population of earth satellites. The collision hazard incurred by the proposed modular Space Station is assessed using the technique presented. The results of a parametric study to determine what type of satellite orbits produce the greatest contribution to the total collision probability are presented. Collision probability for the Space Station is given as a function of Space Station altitude and inclination. Collision probability was also parameterized over miss distance and mission duration.

  11. Analysis of the Perceptions of CPM (Critical Path Method) as a Project Management Tool on Base Level Civil Engineering Projects.

    DTIC Science & Technology

    1986-09-01

    Thesis, September 1986: Roderick D. Reay, Captain, USAF. AFIT/GEM/DEM/86S-21, Air Force Institute of Technology, School of Systems, Wright-Patterson AFB, OH.

  12. Characterizing magnetic resonance signal decay due to Gaussian diffusion: the path integral approach and a convenient computational method

    PubMed Central

    Özarslan, Evren; Westin, Carl-Fredrik; Mareci, Thomas H.

    2016-01-01

    The influence of Gaussian diffusion on the magnetic resonance signal is determined by the apparent diffusion coefficient (ADC) and tensor (ADT) of the diffusing fluid as well as the gradient waveform applied to sensitize the signal to diffusion. Estimations of ADC and ADT from diffusion-weighted acquisitions necessitate computations of, respectively, the b-value and b-matrix associated with the employed pulse sequence. We establish the relationship between these quantities and the gradient waveform by expressing the problem as a path integral and explicitly evaluating it. Further, we show that these important quantities can be conveniently computed for any gradient waveform using a simple algorithm that requires a few lines of code. With this representation, our technique complements the multiple correlation function (MCF) method commonly used to compute the effects of restricted diffusion, and provides a consistent and convenient framework for studies that aim to infer the microstructural features of the specimen. PMID:27182208
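
    In the spirit of the few-line algorithm mentioned above, the sketch below computes the b-value of an arbitrary gradient waveform from the textbook relation q(t) = γ∫₀ᵗ G dt′ and b = ∫₀ᵀ q² dt, and checks it against the analytical pulsed-gradient spin-echo expression; the waveform parameters are hypothetical and this is not necessarily the algorithm of the paper.

```python
import numpy as np

# b-value of an arbitrary gradient waveform via q(t) = gamma * int_0^t G dt' and
# b = int_0^T q(t)^2 dt. The waveform is a simple pulsed-gradient pair so the result
# can be checked against b = gamma^2 G^2 delta^2 (Delta - delta/3).
gamma = 2.675e8                        # rad s^-1 T^-1 (proton gyromagnetic ratio)
G0, delta, Delta = 0.04, 0.02, 0.05    # T/m, s, s
dt = 1e-6
t = np.arange(0.0, Delta + delta, dt)

G = np.zeros_like(t)
G[t < delta] = G0                               # first lobe
G[(t >= Delta) & (t < Delta + delta)] = -G0     # refocusing lobe (effective sign)

q = gamma * np.cumsum(G) * dt
b = np.sum(q**2) * dt                           # s m^-2
b_analytic = gamma**2 * G0**2 * delta**2 * (Delta - delta / 3.0)
print(f"numerical  b = {b * 1e-6:.1f} s/mm^2")
print(f"analytical b = {b_analytic * 1e-6:.1f} s/mm^2")
```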

  13. A Shortest-Path-Based Method for the Analysis and Prediction of Fruit-Related Genes in Arabidopsis thaliana

    PubMed Central

    Su, Fangchu; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2016-01-01

    Biologically, fruits are defined as seed-bearing reproductive structures in angiosperms that develop from the ovary. The fertilization, development and maturation of fruits are crucial for plant reproduction and are precisely regulated by intrinsic genetic regulatory factors. In this study, we used Arabidopsis thaliana as a model organism and attempted to identify novel genes related to fruit-associated biological processes. Specifically, using validated genes, we applied a shortest-path-based method to identify several novel genes in a large network constructed using the protein-protein interactions observed in Arabidopsis thaliana. The described analyses indicate that several of the discovered genes are associated with fruit fertilization, development and maturation in Arabidopsis thaliana. PMID:27434024
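
    A generic illustration of the shortest-path idea is sketched below: genes lying on shortest paths between validated genes in a weighted protein-protein interaction network are collected as candidates. The toy network and gene names are hypothetical, and the sketch omits the scoring and filtering used in the actual study.

```python
import networkx as nx

# Collect genes that lie on shortest paths between validated genes in a weighted
# protein-protein interaction network. Edge list and gene names are hypothetical.
edges = [("AGL1", "X1", 0.3), ("X1", "FRU2", 0.4), ("AGL1", "X2", 0.9),
         ("X2", "FRU2", 0.9), ("FRU2", "X3", 0.2), ("X3", "SEED5", 0.5)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

validated = ["AGL1", "FRU2", "SEED5"]
candidates = set()
for i, a in enumerate(validated):
    for b in validated[i + 1:]:
        path = nx.shortest_path(G, a, b, weight="weight")  # Dijkstra on edge weights
        candidates.update(path[1:-1])                      # interior nodes become candidates
print("candidate genes:", candidates - set(validated))
```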

  14. Range maximization method for ramjet powered missiles with flight path constraints

    NASA Astrophysics Data System (ADS)

    Schoettle, U. M.

    1982-03-01

    Mission performance of ramjet powered missiles is strongly influenced by the trajectory flown. The trajectory optimization problem considered is to obtain the control time histories (i.e., propellant flow rate and angle of attack) which maximize the range of ramjet powered supersonic missiles with preset initial and terminal flight conditions and operational constraints. The approach chosen employs a parametric control model to represent the infinite-dimensional controls by a finite set of parameters. The resulting suboptimal parameter optimization problem is solved by means of nonlinear programming methods. Operational constraints on the state variables are treated by the method of penalty functions. The presented method and numerical results refer to a fixed-geometry solid-fuel integral rocket ramjet missile for air-to-surface or surface-to-surface missions. The numerical results demonstrate that continuous throttle capabilities increase range performance by about 5 to 11 percent when compared to more conventional throttle control.

  15. Reaction Path Optimization with Holonomic Constraints and Kinetic Energy Potentials

    SciTech Connect

    Brokaw, Jason B.; Haas, Kevin R.; Chu, Jhih-wei

    2009-08-11

    Two methods are developed to enhance the stability, efficiency, and robustness of reaction path optimization using a chain of replicas. First, distances between replicas are kept equal during path optimization via holonomic constraints. Finding a reaction path is, thus, transformed into a constrained optimization problem. This approach avoids force projections for finding minimum energy paths (MEPs), and fast-converging schemes such as quasi-Newton methods can be readily applied. Second, we define a new objective function - the total Hamiltonian - for reaction path optimization, by combining the kinetic energy potential of each replica with its potential energy function. Minimizing the total Hamiltonian of a chain determines a minimum Hamiltonian path (MHP). If the distances between replicas are kept equal and a consistent force constant is used, then the kinetic energy potentials of all replicas have the same value. The MHP in this case is the most probable isokinetic path. Our results indicate that low-temperature kinetic energy potentials (<5 K) can be used to prevent the development of kinks during path optimization and can significantly reduce the required steps of minimization by 2-3 times without causing noticeable differences between a MHP and MEP. These methods are applied to three test cases, the C₇eq-to-Cax isomerization of an alanine dipeptide, the ⁴C₁- to-¹C₄ transition of an α-D-glucopyranose, and the helix-to-sheet transition of a GNNQQNY heptapeptide. By applying the methods developed in this work, convergence of reaction path optimization can be achieved for these complex transitions, involving full atomic details and a large number of replicas (>100). For the case of helix-to-sheet transition, we identify pathways whose energy barriers are consistent with experimental measurements. Further, we develop a method based on the work energy theorem to quantify the accuracy of reaction paths and to determine whether the atoms used to define a

  16. Reaction Path Optimization with Holonomic Constraints and Kinetic Energy Potentials.

    PubMed

    Brokaw, Jason B; Haas, Kevin R; Chu, Jhih-Wei

    2009-08-11

    Two methods are developed to enhance the stability, efficiency, and robustness of reaction path optimization using a chain of replicas. First, distances between replicas are kept equal during path optimization via holonomic constraints. Finding a reaction path is, thus, transformed into a constrained optimization problem. This approach avoids force projections for finding minimum energy paths (MEPs), and fast-converging schemes such as quasi-Newton methods can be readily applied. Second, we define a new objective function - the total Hamiltonian - for reaction path optimization, by combining the kinetic energy potential of each replica with its potential energy function. Minimizing the total Hamiltonian of a chain determines a minimum Hamiltonian path (MHP). If the distances between replicas are kept equal and a consistent force constant is used, then the kinetic energy potentials of all replicas have the same value. The MHP in this case is the most probable isokinetic path. Our results indicate that low-temperature kinetic energy potentials (<5 K) can be used to prevent the development of kinks during path optimization and can significantly reduce the required steps of minimization by 2-3 times without causing noticeable differences between a MHP and MEP. These methods are applied to three test cases, the C7eq-to-Cax isomerization of an alanine dipeptide, the (4)C1-to-(1)C4 transition of an α-d-glucopyranose, and the helix-to-sheet transition of a GNNQQNY heptapeptide. By applying the methods developed in this work, convergence of reaction path optimization can be achieved for these complex transitions, involving full atomic details and a large number of replicas (>100). For the case of helix-to-sheet transition, we identify pathways whose energy barriers are consistent with experimental measurements. Further, we develop a method based on the work energy theorem to quantify the accuracy of reaction paths and to determine whether the atoms used to define a path

  17. RSS-Based Method for Sensor Localization with Unknown Transmit Power and Uncertainty in Path Loss Exponent.

    PubMed

    Huang, Jiyan; Liu, Peng; Lin, Wei; Gui, Guan

    2016-09-08

    The localization of a sensor in wireless sensor networks (WSNs) has now gained considerable attention. Since the transmit power and path loss exponent (PLE) are two critical parameters in the received signal strength (RSS) localization technique, many RSS-based location methods, considering the case that both the transmit power and PLE are unknown, have been proposed in the literature. However, these methods require a search process, and cannot give a closed-form solution to sensor localization. In this paper, a novel RSS localization method with a closed-form solution based on a two-step weighted least squares estimator is proposed for the case with the unknown transmit power and uncertainty in PLE. Furthermore, the complete performance analysis of the proposed method is given in the paper. Both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The relationships between the deterministic CRLB and the proposed stochastic CRLB are presented. The paper also proves that the proposed method can reach the stochastic CRLB.

  18. RSS-Based Method for Sensor Localization with Unknown Transmit Power and Uncertainty in Path Loss Exponent

    PubMed Central

    Huang, Jiyan; Liu, Peng; Lin, Wei; Gui, Guan

    2016-01-01

    The localization of a sensor in wireless sensor networks (WSNs) has now gained considerable attention. Since the transmit power and path loss exponent (PLE) are two critical parameters in the received signal strength (RSS) localization technique, many RSS-based location methods, considering the case that both the transmit power and PLE are unknown, have been proposed in the literature. However, these methods require a search process, and cannot give a closed-form solution to sensor localization. In this paper, a novel RSS localization method with a closed-form solution based on a two-step weighted least squares estimator is proposed for the case with the unknown transmit power and uncertainty in PLE. Furthermore, the complete performance analysis of the proposed method is given in the paper. Both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The relationships between the deterministic CRLB and the proposed stochastic CRLB are presented. The paper also proves that the proposed method can reach the stochastic CRLB. PMID:27618055
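
    The sketch below illustrates only the measurement model involved, the log-distance path-loss law with unknown transmit power and PLE, fitted here by a brute-force grid search with a linear least-squares solve for the two unknown channel parameters at each candidate position; it is not the paper's closed-form two-step weighted least squares estimator, and all values are hypothetical.

```python
import numpy as np

# Log-distance path-loss model with unknown transmit power P0 and path-loss exponent
# (PLE), localized by grid search over position with a linear least-squares solve for
# (P0, PLE) at each candidate point. Anchors and values are hypothetical.
rng = np.random.default_rng(3)
anchors = np.array([[0, 0], [50, 0], [0, 50], [50, 50]], dtype=float)   # known sensors, m
true_pos, P0_true, ple_true = np.array([18.0, 31.0]), -30.0, 3.2

d_true = np.linalg.norm(anchors - true_pos, axis=1)
rss = P0_true - 10 * ple_true * np.log10(d_true) + rng.normal(0, 1.0, len(anchors))  # dBm

best = (np.inf, None, None)
for x in np.linspace(0, 50, 101):
    for y in np.linspace(0, 50, 101):
        d = np.maximum(np.linalg.norm(anchors - np.array([x, y]), axis=1), 1e-3)
        A = np.column_stack([np.ones(len(anchors)), -10 * np.log10(d)])  # unknowns: P0, PLE
        coef, *_ = np.linalg.lstsq(A, rss, rcond=None)
        sse = np.sum((rss - A @ coef) ** 2)
        if sse < best[0]:
            best = (sse, (x, y), coef)
print("estimated position:", best[1], " P0, PLE:", np.round(best[2], 2))
```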

  19. Path-integral Monte Carlo method for the local Z2 Berry phase.

    PubMed

    Motoyama, Yuichi; Todo, Synge

    2013-02-01

    We present a loop cluster algorithm Monte Carlo method for calculating the local Z(2) Berry phase of the quantum spin models. The Berry connection, which is given as the inner product of two ground states with different local twist angles, is expressed as a Monte Carlo average on the worldlines with fixed spin configurations at the imaginary-time boundaries. The "complex weight problem" caused by the local twist is solved by adopting the meron cluster algorithm. We present the results of simulation on the antiferromagnetic Heisenberg model on an out-of-phase bond-alternating ladder to demonstrate that our method successfully detects the change in the valence bond pattern at the quantum phase transition point. We also propose that the gauge-fixed local Berry connection can be an effective tool to estimate precisely the quantum critical point.

  20. Gravity-dependent signal path variation in a large VLBI telescope modelled with a combination of surveying methods

    NASA Astrophysics Data System (ADS)

    Sarti, Pierguido; Abbondanza, C.; Vittuari, L.

    2009-11-01

    The very long baseline interferometry (VLBI) antenna in Medicina (Italy) is a 32-m AZ-EL mount that was surveyed several times, adopting an indirect method, for the purpose of estimating the eccentricity vector between the co-located VLBI and Global Positioning System instruments. In order to fulfill this task, targets were located in different parts of the telescope’s structure. Triangulation and trilateration on the targets highlight a consistent amount of deformation that biases the estimate of the instrument’s reference point by up to 1 cm, depending on the targets’ locations. Therefore, whenever the estimation of accurate local ties is needed, it is critical to take into consideration the action of gravity on the structure. Furthermore, deformations induced by gravity on VLBI telescopes may modify the length of the path travelled by the incoming radio signal to a non-negligible extent. As a consequence, differently from what is usually assumed, the relative distance of the feed horn’s phase centre with respect to the elevation axis may vary, depending on the telescope’s pointing elevation. The Medicina telescope’s signal path variation ΔL increases by approximately 2 cm as the pointing elevation changes from horizon to zenith; it is described by an elevation-dependent second-order polynomial function computed, according to Clark and Thomsen (Technical report 100696, NASA, Greenbelt, 1988), as a linear combination of three terms: receiver displacement ΔR, primary reflector’s vertex displacement ΔV and focal length variations ΔF. ΔL was investigated with a combination of terrestrial triangulation and trilateration, laser scanning and a finite element model of the antenna. The antenna gain (or auto-focus curve) ΔG is routinely determined through astronomical observations. A surprisingly accurate reproduction of ΔG can be obtained with a combination of ΔV, ΔF and ΔR.

  1. Contribution analysis of bus pass-by noise based on dynamic transfer path method

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Zheng, Sifa; Hao, Peng; Lian, Xiaomin

    2011-10-01

    Bus pass-by noise has become one of the main noise sources that seriously disturb the mental and physical health of urban residents. The key to reducing bus noise is to identify the major noise sources. In this paper, a dynamic transfer characteristic model of the bus acceleration process is established, which can quantitatively describe the relationship between the sound or vibration sources of the vehicle and the response points outside the vehicle; a test method has also been designed, which can quickly and easily identify the contributions to the bus pass-by noise. Experimental results show that the dynamic transfer characteristic model can identify the main noise sources and their contributions during acceleration, which is significant for bus noise reduction.

  2. Detection of emission indices of aircraft exhaust compounds by open-path optical methods at airports

    NASA Astrophysics Data System (ADS)

    Schürmann, Gregor; Schäfer, Klaus; Jahn, Carsten; Hoffmann, Herbert; Utzig, Selina

    2005-10-01

    Air pollutant emission rates of aircraft are determined with test bed measurements. Regulations exist for CO2, NO, NO2, CO concentrations, the content of total unburned hydrocarbons and the smoke number, a measure of soot. These emission indices are listed for each engine in a database of the International Civil Aviation Organisation (ICAO) for four different thrust levels (idle, approach, cruise and take-off). It is a common procedure to use this database as a starting point to estimate aircraft emissions at airports and further on to calculate the contribution of airports to local air quality. The comparison of these indices to real in-use measurements therefore is a vital task to test the quality of air quality models at airports. Here a method to determine emission indices is used, where concentration measurements of CO2 together with other pollutants in the aircraft plume are needed. During intensive measurement campaigns at Zurich (ZRH) and Paris Charles De Gaulle (CDG) airports, concentrations of CO2, NO, NO2 and CO were measured. The measurement techniques were Fourier-Transform-Infrared (FTIR) spectrometry and Differential Optical Absorption Spectroscopy (DOAS). The big advantage of these methods is that no operations at the airport are affected during measurement times. Together with detailed observations of taxiway movements, a comparison of emission indices with real in-use emissions is possible.
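
    Concentration excesses measured in a plume are commonly converted to emission indices with a carbon-balance relation, sketched below with hypothetical mixing ratios; the CO2 emission index of roughly 3160 g per kg of kerosene is an assumed standard value, and this is not the campaign's actual retrieval chain.

```python
# Carbon-balance relation commonly used to turn plume concentration measurements into
# emission indices: the excess of species X relative to excess CO2, scaled by the
# molar-mass ratio and the (approximately known) CO2 emission index of kerosene.
# All mixing-ratio values below are hypothetical.
EI_CO2 = 3160.0      # g CO2 per kg fuel, approximate value for kerosene combustion
M = {"CO2": 44.01, "NO": 30.01, "CO": 28.01}    # g/mol

def emission_index(species, delta_x_ppb, delta_co2_ppb):
    """EI_X in g per kg fuel from plume excess mixing ratios."""
    return (delta_x_ppb / delta_co2_ppb) * (M[species] / M["CO2"]) * EI_CO2

# hypothetical plume excesses measured along the open path
print("EI_NO :", round(emission_index("NO", 120.0, 40_000.0), 2), "g/kg fuel")
print("EI_CO :", round(emission_index("CO", 900.0, 40_000.0), 2), "g/kg fuel")
```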

  3. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  4. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  5. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
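
    The core matrix step is easy to sketch. The following Python fragment normalises a toy table of single-step transition counts and raises it to the power n to obtain n-step transition probabilities; the counts are invented placeholders rather than k-anonymity data.

        # Minimal sketch of the n-step transition matrices described above: normalise
        # single-step rule counts row-wise, then raise the matrix to the power n.
        import numpy as np

        counts = np.array([[0, 4, 1],        # single-step transition counts between
                           [2, 0, 3],        # three anonymised locations
                           [1, 1, 0]], dtype=float)

        P = counts / counts.sum(axis=1, keepdims=True)   # normalised transition matrix
        n = 3
        P_n = np.linalg.matrix_power(P, n)               # n-step transition probabilities

        # "Rough prediction": probability of reaching location 2 from location 0 in n steps
        print(P_n[0, 2])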

  6. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset.

    PubMed

    Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified.

  7. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.
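
    The probability in question can also be estimated by brute force. The hedged Python sketch below samples a bivariate Gaussian location error and counts the fraction of samples falling within the radius of interest; it is a Monte Carlo check on the idea, not the analytic debris-collision integration used in the report, and every number in it is an illustrative assumption.

        # Hedged sketch: estimate the probability that a stroke described by a bivariate
        # Gaussian location error lies within radius R of an arbitrary point, using
        # Monte Carlo sampling rather than an analytic integration of the error ellipse.
        import numpy as np

        rng = np.random.default_rng(0)
        mu = np.array([0.0, 0.0])                 # most likely stroke location (km)
        cov = np.array([[0.25, 0.05],             # error-ellipse covariance (km^2)
                        [0.05, 0.16]])
        point = np.array([0.4, -0.2])             # point of interest (km)
        radius = 0.5                              # radius of concern (km)

        samples = rng.multivariate_normal(mu, cov, size=200_000)
        inside = np.linalg.norm(samples - point, axis=1) <= radius
        print("P(stroke within radius) ~", inside.mean())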

  8. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  9. Statistical methods to quantify the effect of mite parasitism on the probability of death in honey bee colonies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Varroa destructor is a mite parasite of European honey bees, Apis mellifera, that weakens the population, can lead to the death of an entire honey bee colony, and is believed to be the parasite with the most economic impact on beekeeping. The purpose of this study was to estimate the probability of ...

  10. Probability theory-based SNP association study method for identifying susceptibility loci and genetic disease models in human case-control data.

    PubMed

    Yuan, Xiguo; Zhang, Junying; Wang, Yue

    2010-12-01

    One of the most challenging points in studying human common complex diseases is to search for both strong and weak susceptibility single-nucleotide polymorphisms (SNPs) and identify forms of genetic disease models. Currently, a number of methods have been proposed for this purpose. Many of them have not been validated through application to various genome datasets, so their performance in real practice is unclear. In this paper, we present a novel SNP association study method based on probability theory, called ProbSNP. The method first detects SNPs by evaluating their joint probabilities in combination with disease status and selects those with the lowest joint probabilities as susceptibility SNPs, and then identifies some forms of genetic disease models by testing multiple-locus interactions among the selected SNPs. The joint probabilities of combined SNPs are estimated by establishing Gaussian distribution probability density functions, in which the related parameters (i.e., mean value and standard deviation) are evaluated based on allele and haplotype frequencies. Finally, we test and validate the method using various genome datasets. We find that ProbSNP has shown remarkable success in the applications to both simulated genome data and real genome-wide data.
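
    As a loose illustration of the Gaussian-probability idea (not the ProbSNP algorithm itself), the Python sketch below scores each SNP by how improbable its case-group allele frequency is under a Gaussian whose mean and standard deviation come from the pooled allele frequency, and flags the lowest-probability SNPs; all data are synthetic.

        # Loose illustration only: rank SNPs by the Gaussian density of their observed
        # case-group allele frequency, with mean and spread taken from pooled frequencies.
        # This is not the published ProbSNP estimator; all data below are synthetic.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)
        n_snps, n_cases = 50, 500
        pooled_freq = rng.uniform(0.1, 0.9, n_snps)                    # pooled allele frequencies
        case_freq = rng.binomial(2 * n_cases, pooled_freq) / (2 * n_cases)

        sigma = np.sqrt(pooled_freq * (1 - pooled_freq) / (2 * n_cases))
        joint_prob = norm.pdf(case_freq, loc=pooled_freq, scale=sigma)  # Gaussian density of observed freq

        candidates = np.argsort(joint_prob)[:5]                         # five least probable SNPs
        print("candidate susceptibility SNP indices:", candidates)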

  11. Methods for assessing movement path recursion with application to African buffalo in South Africa

    USGS Publications Warehouse

    Bar-David, S.; Bar-David, I.; Cross, P.C.; Ryan, S.J.; Knechtel, C.U.; Getz, W.M.

    2009-01-01

    Recent developments of automated methods for monitoring animal movement, e.g., global positioning systems (GPS) technology, yield high-resolution spatiotemporal data. To gain insights into the processes creating movement patterns, we present two new techniques for extracting information from these data on repeated visits to a particular site or patch ("recursions"). Identification of such patches and quantification of recursion pathways, when combined with patch-related ecological data, should contribute to our understanding of the habitat requirements of large herbivores, of factors governing their space-use patterns, and of their interactions with the ecosystem. We begin by presenting output from a simple spatial model that simulates movements of large-herbivore groups based on minimal parameters: resource availability and rates of resource recovery after a local depletion. We then present the details of our new techniques of analyses (recursion analysis and circle analysis) and apply them to data generated by our model, as well as two sets of empirical data on movements of African buffalo (Syncerus caffer): the first collected in Klaserie Private Nature Reserve and the second in Kruger National Park, South Africa. Our recursion analyses of model outputs provide us with a basis for inferring aspects of the processes governing the production of buffalo recursion patterns, particularly the potential influence of resource recovery rate. Although the focus of our simulations was a comparison of movement patterns produced by different resource recovery rates, we conclude our paper with a comprehensive discussion of how recursion analyses can be used when appropriate ecological data are available to elucidate various factors influencing movement. Inter alia, these include the various limiting and preferred resources, parasites, and topographical and landscape factors. © 2009 by the Ecological Society of America.
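
    The notion of a recursion can be illustrated with a few lines of Python: count how often a trajectory re-enters circular patches around candidate sites. The track, patch centres and radius below are invented, and the sketch is not the authors' recursion or circle analysis.

        # Simple sketch of a recursion count: repeated visits to circular patches along a
        # GPS trajectory.  Patch centres, radius and the toy track are illustrative.
        import numpy as np

        track = np.array([[0.0, 0.0], [1.0, 0.2], [2.1, 0.1], [1.0, 0.3],
                          [0.1, 0.1], [1.1, 0.2], [2.0, 0.0]])      # successive GPS fixes (km)
        patch_centres = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])
        radius = 0.3

        for i, centre in enumerate(patch_centres):
            inside = np.linalg.norm(track - centre, axis=1) <= radius
            # a new visit starts whenever the trajectory re-enters the patch
            visits = np.sum(inside[1:] & ~inside[:-1]) + int(inside[0])
            print(f"patch {i}: {visits} visits, {max(visits - 1, 0)} recursions")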

  12. A single-beam titration method for the quantification of open-path Fourier transform infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Sung, Lung-Yu; Lu, Chia-Jung

    2014-09-01

    This study introduced a quantitative method that can be used to measure the concentration of analytes directly from a single-beam spectrum of open-path Fourier Transform Infrared Spectroscopy (OP-FTIR). The peak shapes of the analytes in a single-beam spectrum were gradually canceled (i.e., "titrated") by dividing an aliquot of a standard transmittance spectrum with a known concentration, and the sum of the squared differential synthetic spectrum was calculated as an indicator for the end point of this titration. The quantity of a standard transmittance spectrum that is needed to reach the end point can be used to calculate the concentrations of the analytes. A NIST traceable gas standard containing six known compounds was used to compare the quantitative accuracy of both this titration method and that of a classical least squares (CLS) method using a closed-cell FTIR spectrum. Continuous FTIR analysis of an industrial exhaust stack showed that concentration trends were consistent between the CLS and titration methods. The titration method allowed the quantification to be performed without the need of a clean single-beam background spectrum, which was beneficial for the field measurement of OP-FTIR. Persistent constituents of the atmosphere, such as NH3, CH4 and CO, were successfully quantified using the single-beam titration method with OP-FTIR data that are normally inaccurate when using the CLS method due to the lack of a suitable background spectrum. Also, the synthetic spectrum at the titration end point contained virtually no peaks of analytes, but it did contain the remaining information needed to provide an alternative means of obtaining an ideal single-beam background for OP-FTIR.
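
    A rough Python sketch of the titration idea follows: the measured single-beam spectrum is repeatedly divided by fractions of a standard transmittance spectrum, and the sum of the squared differential spectrum is tracked as the end-point indicator. The spectra are synthetic Gaussian bands, not OP-FTIR data, and the absorption coefficients are arbitrary.

        # Rough sketch of single-beam titration on synthetic spectra: divide the measured
        # spectrum by growing amounts of a standard transmittance spectrum and take the
        # minimum of the squared differential spectrum as the end point.
        import numpy as np

        wn = np.linspace(900.0, 1100.0, 400)                      # wavenumber grid (cm-1)
        peak = np.exp(-0.5 * ((wn - 1000.0) / 5.0) ** 2)          # analyte band shape
        true_conc = 2.5                                           # "unknown" concentration (ppm)
        single_beam = np.exp(-true_conc * 0.1 * peak)             # measured single-beam spectrum
        t_standard = np.exp(-1.0 * 0.1 * peak)                    # transmittance of a 1 ppm standard

        def roughness(spectrum):
            return np.sum(np.diff(spectrum) ** 2)                 # squared differential spectrum

        concs = np.linspace(0.0, 5.0, 501)
        scores = [roughness(single_beam / t_standard ** c) for c in concs]
        print("end-point concentration ~", concs[int(np.argmin(scores))], "ppm")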

  13. An analysis of quantum effects on the thermodynamic properties of cryogenic hydrogen using the path integral method

    SciTech Connect

    Nagashima, H.; Tsuda, S.; Tsuboi, N.; Koshi, M.; Hayashi, K. A.; Tokumasu, T.

    2014-04-07

    In this paper, we describe the analysis of the thermodynamic properties of cryogenic hydrogen using classical molecular dynamics (MD) and path integral MD (PIMD) methods to understand the effects of the quantum nature of hydrogen molecules. We performed constant NVE MD simulations across a wide density–temperature region to establish an equation of state (EOS). Moreover, the quantum effect on the molecular mechanism underlying the pressure–volume–temperature relationship was addressed. The EOS was derived on the basis of classical mechanics, using only the MD simulation results. Simulation results were compared between the two MD methods and with experimental data. As a result, it was confirmed that although the EOS based on classical MD cannot reproduce the experimental saturation properties of hydrogen in the high-density region, the EOS based on PIMD reproduces those thermodynamic properties of hydrogen well. Moreover, it was clarified that taking quantum effects into account makes the repulsive force larger and the potential well shallower. Because of this mechanism, the intermolecular interaction of hydrogen molecules diminishes and the virial pressure increases.

  14. The lead-lag relationship between stock index and stock index futures: A thermal optimal path method

    NASA Astrophysics Data System (ADS)

    Gong, Chen-Chen; Ji, Shen-Dan; Su, Li-Ling; Li, Sai-Ping; Ren, Fei

    2016-02-01

    The study of the lead-lag relationship between a stock index and its stock index futures is of great importance for its wide application in hedging and portfolio investments. Previous works mainly use conventional methods such as the Granger causality test, GARCH models and error correction models, and focus on the causality relation between the index and futures in a certain period. By using a non-parametric approach, the thermal optimal path (TOP) method, we study the lead-lag relationship between the China Securities Index 300 (CSI 300), the Hang Seng Index (HSI), the Standard and Poor 500 (S&P 500) Index and their associated futures to reveal the variation of their relationship over time. Our findings show evidence of pronounced futures leadership for well established index futures, namely the HSI and S&P 500 index futures, while for the developing-market index CSI 300 the spot index shows pronounced leadership. We offer an explanation for the surges of the lead-lag function based on an indicator that quantifies the differences between spot and futures prices. Our results provide new perspectives for the understanding of the dynamical evolution of the lead-lag relationship between stock index and stock index futures, which is valuable for the study of market efficiency and its applications.

  15. A Random Walk on a Circular Path

    ERIC Educational Resources Information Center

    Ching, W.-K.; Lee, M. S.

    2005-01-01

    This short note introduces an interesting random walk on a circular path with cards of numbers. By using high school probability theory, it is proved that under some assumptions on the number of cards, the probability that a walker will return to a fixed position will tend to one as the length of the circular path tends to infinity.
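
    The setting is easy to simulate. The Python sketch below estimates, for a ring of N sites and steps drawn from a fixed set of "cards", the probability of returning to the starting position within a bounded number of steps; the card values and cut-offs are illustrative assumptions, and the simulation is not the note's proof.

        # Illustrative simulation: a walker on a circular path of n_sites positions takes
        # steps drawn from a fixed set of cards; estimate the chance of returning to the
        # start within max_steps.  Parameters are arbitrary choices, not the note's setup.
        import random

        def return_probability(n_sites, cards, max_steps=1000, trials=5000, seed=0):
            rng = random.Random(seed)
            returns = 0
            for _ in range(trials):
                pos = 0
                for _ in range(max_steps):
                    pos = (pos + rng.choice(cards)) % n_sites
                    if pos == 0:
                        returns += 1
                        break
            return returns / trials

        for n in (10, 100, 1000):
            print(n, return_probability(n, cards=[1, 2, 3]))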

  16. Comparing laser-based open- and closed-path gas analyzers to measure methane fluxes using the eddy covariance method

    USGS Publications Warehouse

    Detto, M.; Verfaillie, J.; Anderson, F.; Xu, L.; Baldocchi, D.

    2011-01-01

    Closed- and open-path methane gas analyzers are used in eddy covariance systems to compare three potential methane-emitting ecosystems in the Sacramento-San Joaquin Delta (CA, USA): a rice field, a peatland pasture and a restored wetland. The study points out similarities and differences of the systems in field experiments and data processing. The closed-path system, despite a less intrusive placement with the sonic anemometer, required more care and power. In contrast, the open-path system appears more versatile for a remote and unattended experimental site. Overall, the two systems have comparable minimum detectable limits, but synchronization between wind speed and methane data, air density corrections and spectral losses have different impacts on the computed flux covariances. For the closed-path analyzer, air density effects are less important, but the synchronization and spectral losses may represent a problem when fluxes are small or when an undersized pump is used. For the open-path analyzer, air density corrections are greater, due to spectroscopy effects and the classic Webb-Pearman-Leuning correction. Comparison between the 30-min fluxes reveals good agreement in terms of magnitudes between open-path and closed-path flux systems. However, the scatter is large, as a consequence of the intensive data processing which both systems require. © 2011.
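
    Stripped of the corrections discussed above, the flux computation itself is just a covariance. The Python sketch below computes an uncorrected CH4 flux from synthetic 10 Hz series of vertical wind and CH4 density; the Webb-Pearman-Leuning density corrections and spectral corrections are deliberately omitted, and all numbers are placeholders.

        # Minimal eddy covariance sketch: the half-hour CH4 flux is the covariance of
        # vertical wind speed and CH4 density fluctuations.  Density and spectral
        # corrections are omitted; the 10 Hz series are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(2)
        n = 10 * 60 * 30                                        # 30 min of 10 Hz data
        w = rng.normal(0.0, 0.3, n)                             # vertical wind speed (m s-1)
        ch4 = 1.2e-3 + 2.0e-6 * w + rng.normal(0.0, 1e-5, n)    # CH4 density (g m-3)

        w_prime = w - w.mean()
        c_prime = ch4 - ch4.mean()
        flux = np.mean(w_prime * c_prime)                       # g CH4 m-2 s-1
        print("uncorrected CH4 flux:", flux * 1e3, "mg m-2 s-1")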

  17. Path Pascal

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Kolstad, R. B.; Holle, D. F.; Miller, T. J.; Krause, P.; Horton, K.; Macke, T.

    1983-01-01

    Path Pascal is a high-level experimental programming language based on PASCAL, which incorporates extensions for systems and real-time programming; PASCAL is extended to handle real-time concurrent systems.

  18. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
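
    The percolation step can be illustrated in a few lines of Python: generate a random grid of void and solid cells at a target porosity, keep only the void clusters that connect the two boundaries, and compare effective with superficial porosity. Grid size, porosity and connectivity rule are illustrative choices, not the paper's model.

        # Sketch of the stochastic material-network idea: random void/solid grid at ~50 %
        # porosity, retain 4-connected void clusters touching both top and bottom
        # boundaries, and report the effective (inter-connected) porosity.
        import numpy as np
        from scipy.ndimage import label

        rng = np.random.default_rng(3)
        grid = rng.random((200, 200)) < 0.5            # True = void, superficial porosity ~0.5

        labels, n_clusters = label(grid)               # 4-connected void clusters
        top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
        percolating = top & bottom                     # clusters touching both boundaries

        effective = np.isin(labels, list(percolating))
        print("superficial porosity:", grid.mean())
        print("effective (inter-connected) porosity:", effective.mean())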

  19. Asteroidal collision probabilities

    NASA Astrophysics Data System (ADS)

    Bottke, W. F.; Greenberg, R.

    1993-05-01

    Several past calculations of collision probabilities between pairs of bodies on independent orbits have yielded inconsistent results. We review the methodologies and identify their various problems. Greenberg's (1982) collision probability formalism (now with a corrected symmetry assumption) is equivalent to Wetherill's (1967) approach, except that it includes a way to avoid singularities near apsides. That method shows that the procedure by Namiki and Binzel (1991) was accurate for those cases where singularities did not arise.

  20. Quantification of Methicillin-Resistant Staphylococcus aureus Strains in Marine and Freshwater Samples by the Most-Probable-Number Method

    PubMed Central

    Levin-Edens, Emily; Meschke, John Scott; Roberts, Marilyn C.

    2011-01-01

    Recreational beach environments have been recently identified as a potential reservoir for methicillin-resistant Staphylococcus aureus (MRSA); however, accurate quantification methods are needed for the development of risk assessments. This novel most-probable-number approach for MRSA quantification offers improved sensitivity and specificity by combining broth enrichment with MRSA-specific chromogenic agar. PMID:21441335

  1. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2015-01-28

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(-6)) to O(P(-12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  2. Real-space finite-difference approach for multi-body systems: path-integral renormalization group method and direct energy minimization method.

    PubMed

    Sasaki, Akira; Kojo, Masashi; Hirose, Kikuji; Goto, Hidekazu

    2011-11-02

    The path-integral renormalization group and direct energy minimization method of practical first-principles electronic structure calculations for multi-body systems within the framework of the real-space finite-difference scheme are introduced. These two methods can handle higher dimensional systems with consideration of the correlation effect. Furthermore, they can be easily extended to the multicomponent quantum systems which contain more than two kinds of quantum particles. The key to the present methods is employing linear combinations of nonorthogonal Slater determinants (SDs) as multi-body wavefunctions. As one of the noticeable results, the same accuracy as the variational Monte Carlo method is achieved with a few SDs. This enables us to study the entire ground state consisting of electrons and nuclei without the need to use the Born-Oppenheimer approximation. Recent activities on methodological developments aiming towards practical calculations such as the implementation of auxiliary field for Coulombic interaction, the treatment of the kinetic operator in imaginary-time evolutions, the time-saving double-grid technique for bare-Coulomb atomic potentials and the optimization scheme for minimizing the total-energy functional are also introduced. As test examples, the total energy of the hydrogen molecule, the atomic configuration of the methylene and the electronic structures of two-dimensional quantum dots are calculated, and the accuracy, availability and possibility of the present methods are demonstrated.

  3. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    This article is part of a discussion on Monte Carlo methods and outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
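
    The article's examples are written in Visual Basic; the same idea is sketched below in Python: write the definite integral as an expectation over a uniform random variable and approximate it by a sample mean.

        # Monte Carlo integration sketch: integral of f over [a, b] equals
        # (b - a) * E[f(U)] with U uniform on [a, b], approximated by a sample mean.
        import random
        import math

        def mc_integral(f, a, b, n=100_000, seed=0):
            rng = random.Random(seed)
            total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
            return (b - a) * total / n

        # Example: the integral of sin(x) from 0 to pi is exactly 2
        print(mc_integral(math.sin, 0.0, math.pi))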

  4. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97
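
    For contrast with the regional regression equations, a bare-bones at-site estimate can be sketched in Python: fit the moments of the log10 annual peaks and read the 1-percent annual exceedance-probability discharge off a Pearson Type III distribution. This is a hedged simplification; it is not the expected moments algorithm, uses no regional skew or low-outlier screening, and the peak series is synthetic.

        # Simplified at-site log-Pearson Type III quantile from sample moments of the
        # log10 annual peaks (illustrative only; not the report's procedure or data).
        import numpy as np
        from scipy.stats import pearson3, skew

        peaks = np.array([812., 1460., 955., 2300., 640., 1780., 1120.,
                          890., 3050., 1330., 760., 1990.])       # annual peaks (ft3/s)
        logq = np.log10(peaks)

        m, s = logq.mean(), logq.std(ddof=1)
        g = skew(logq, bias=False)                                 # station skew of the logs

        aep = 0.01                                                 # 1-percent annual exceedance probability
        q100 = 10 ** pearson3.ppf(1.0 - aep, g, loc=m, scale=s)    # "100-year" discharge
        print(f"1-percent AEP discharge ~ {q100:.0f} ft3/s")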

  5. Transition Path Theory

    NASA Astrophysics Data System (ADS)

    vanden-Eijnden, E.

    The dynamical behavior of many systems arising in physics, chemistry, biology, etc. is dominated by rare but important transition events between long lived states. For over 70 years, transition state theory (TST) has provided the main theoretical framework for the description of these events [17,33,34]. Yet, while TST and evolutions thereof based on the reactive flux formalism [1, 5] (see also [30,31]) give an accurate estimate of the transition rate of a reaction, at least in principle, the theory tells very little in terms of the mechanism of this reaction. Recent advances, such as transition path sampling (TPS) of Bolhuis, Chandler, Dellago, and Geissler [3, 7] or the action method of Elber [15, 16], may seem to go beyond TST in that respect: these techniques indeed allow one to sample the ensemble of reactive trajectories, i.e. the trajectories by which the reaction occurs. And yet, the reactive trajectories may again be rather uninformative about the mechanism of the reaction. This may sound paradoxical at first: what more than actual reactive trajectories could one need to understand a reaction? The problem, however, is that the reactive trajectories by themselves give only very indirect information about the statistical properties of these trajectories. This is similar to why statistical mechanics is not simply a footnote in books about classical mechanics. What is the probability density that a trajectory be at a given location in state-space conditional on it being reactive? What is the probability current of these reactive trajectories? What is their rate of appearance? These are the questions of interest and they are not easy to answer directly from the ensemble of reactive trajectories. The right framework to tackle these questions also goes beyond standard equilibrium statistical mechanics because of the nontrivial bias that the very definition of the reactive trajectories implies: they must be involved in a reaction. The aim of this chapter is to

  6. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  7. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational–rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    SciTech Connect

    Mielke, Steven L. E-mail: truhlar@umn.edu; Truhlar, Donald G. E-mail: truhlar@umn.edu

    2015-01-28

    We present an improved version of our “path-by-path” enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P(−6)) to O(P(−12)), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational–rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan–Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300–3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  8. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method without positivity constraint initially, and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of

  9. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station. Future applications could include forensic meteorology.

  10. Evaluation of a most probable number method for the enumeration of Legionella pneumophila from potable and related water samples.

    PubMed

    Sartory, D P; Spies, K; Lange, B; Schneider, S; Langer, B

    2017-04-01

    This study compared the performance of a novel MPN method (Legiolert/Quanti-Tray) with the ISO 11731-2 membrane filtration method for the enumeration of Legionella pneumophila from 100 ml potable water and related samples. Data from a multi-laboratory study analysed according to ISO 17994 showed that Legiolert™/Quanti-Tray® yielded on average higher counts of L. pneumophila. The Legiolert medium had a high specificity of 96·4%. The new method represents a significant improvement in the enumeration of L. pneumophila from drinking water-related samples.

  11. Growing season methane emissions from a permafrost peatland of northeast China: Observations using open-path eddy covariance method

    NASA Astrophysics Data System (ADS)

    Yu, Xueyang; Song, Changchun; Sun, Li; Wang, Xianwei; Shi, Fuxi; Cui, Qian; Tan, Wenwen

    2017-03-01

    The mid-high latitude permafrost peatlands in the Northern Hemisphere are a major natural source of methane (CH4) to the atmosphere. Ecosystem-scale CH4 emissions from a typical permafrost peatland in the Great Hing'an Mountains were observed during the growing seasons of 2014 and 2015 using the open-path eddy covariance method. Relevant environmental factors such as temperature and precipitation were also collected. There was a clear diurnal variation in methane emissions in the second half of each growing season, with significantly higher emission rates in the wet sector of the study area. The daily CH4 exchange ranged from 1.8 mg CH4 m-2 d-1 to 40.2 mg CH4 m-2 d-1 in 2014 and from -3.9 to 15.0 mg CH4 m-2 d-1 in 2015. There were no peaks of CH4 fluxes during the spring thawing period. However, large peaks of CH4 emission were found in the second half of both growing seasons. The CH4 emission after Jul 25th accounted for 77.9% of the total growing season emission in 2014 and 85.9% in 2015. The total CH4 emissions during the growing seasons of 2014 and 2015 were approximately 1.52 g CH4 m-2 and 0.71 g CH4 m-2, respectively. CH4 fluxes during the growing seasons were significantly correlated with thawing depth (R2 = 0.71, P < 0.01) and soil temperatures (R2 = 0.75, P < 0.01) at 40 cm depth. An empirical equation using these two major variables was modified to estimate growing season CH4 emissions in permafrost peatlands. Our multiyear observations indicate that the time-lagged volume of precipitation during the growing season is a key factor in interpreting local inter-annual variations in CH4 emissions. Our results suggest that the low temperature in the deep soil layers effectively restricts methane production and emission rates; these conditions may create significant positive feedback under global climate change.

  12. Methods for designing treatments to reduce interior noise of predominant sources and paths in a single engine light aircraft

    NASA Technical Reports Server (NTRS)

    Hayden, Richard E.; Remington, Paul J.; Theobald, Mark A.; Wilby, John F.

    1985-01-01

    The sources and paths by which noise enters the cabin of a small single engine aircraft were determined through a combination of flight and laboratory tests. The primary sources of noise were found to be airborne noise from the propeller and engine casing, airborne noise from the engine exhaust, structureborne noise from the engine/propeller combination and noise associated with air flow over the fuselage. For the propeller, the primary airborne paths were through the firewall, windshield and roof. For the engine, the most important airborne path was through the firewall. Exhaust noise was found to enter the cabin primarily through the panels in the vicinity of the exhaust outlet although exhaust noise entering the cabin through the firewall is a distinct possibility. A number of noise control techniques were tried, including firewall stiffening to reduce engine and propeller airborne noise, two-stage isolators and engine mounting spider stiffening to reduce structure-borne noise, and wheel well covers to reduce air flow noise.

  13. Existence of short-time approximations of any polynomial order for the computation of density matrices by path integral methods

    NASA Astrophysics Data System (ADS)

    Predescu, Cristian

    2004-05-01

    In this paper I provide significant mathematical evidence in support of the existence of direct short-time approximations of any polynomial order for the computation of density matrices of physical systems described by arbitrarily smooth and bounded from below potentials. While for Theorem 2, which is “experimental,” I only provide a “physicist’s” proof, I believe the present development is mathematically sound. As a verification, I explicitly construct two short-time approximations to the density matrix having convergence orders 3 and 4, respectively. Furthermore, in Appendix B, I derive the convergence constant for the trapezoidal Trotter path integral technique. The convergence orders and constants are then verified by numerical simulations. While the two short-time approximations constructed are of sure interest to physicists and chemists involved in Monte Carlo path integral simulations, the present paper is also aimed at the mathematical community, who might find the results interesting and worth exploring. I conclude the paper by discussing the implications of the present findings with respect to the solvability of the dynamical sign problem appearing in real-time Feynman path integral simulations.

  14. Methods for designing treatments to reduce interior noise of predominant sources and paths in a single engine light aircraft

    NASA Astrophysics Data System (ADS)

    Hayden, Richard E.; Remington, Paul J.; Theobald, Mark A.; Wilby, John F.

    1985-03-01

    The sources and paths by which noise enters the cabin of a small single engine aircraft were determined through a combination of flight and laboratory tests. The primary sources of noise were found to be airborne noise from the propeller and engine casing, airborne noise from the engine exhaust, structureborne noise from the engine/propeller combination and noise associated with air flow over the fuselage. For the propeller, the primary airborne paths were through the firewall, windshield and roof. For the engine, the most important airborne path was through the firewall. Exhaust noise was found to enter the cabin primarily through the panels in the vicinity of the exhaust outlet although exhaust noise entering the cabin through the firewall is a distinct possibility. A number of noise control techniques were tried, including firewall stiffening to reduce engine and propeller airborne noise, two-stage isolators and engine mounting spider stiffening to reduce structure-borne noise, and wheel well covers to reduce air flow noise.

  15. Oscillator strengths and transition probabilities from the Breit–Pauli R-matrix method: Ne IV

    SciTech Connect

    Nahar, Sultana N.

    2014-09-15

    The atomic parameters–oscillator strengths, line strengths, radiative decay rates (A), and lifetimes–for fine structure transitions of electric dipole (E1) type for the astrophysically abundant ion Ne IV are presented. The results include 868 fine structure levels with n≤ 10, l≤ 9, and 1/2≤J≤ 19/2 of even and odd parities, and the corresponding 83,767 E1 transitions. The calculations were carried out using the relativistic Breit–Pauli R-matrix method in the close coupling approximation. The transitions have been identified spectroscopically using an algorithm based on quantum defect analysis and other criteria. The calculated energies agree with the 103 observed and identified energies to within 3% or better for most of the levels. Some larger differences are also noted. The A-values show good to fair agreement with the very limited number of available transitions in the table compiled by NIST, but show very good agreement with the latest published multi-configuration Hartree–Fock calculations. The present transitions should be useful for diagnostics as well as for precise and complete spectral modeling in the soft X-ray to infra-red regions of astrophysical and laboratory plasmas. -- Highlights: •The first application of BPRM method for accurate E1 transitions in Ne IV is reported. •Amount of atomic data (n going up to 10) is complete for most practical applications. •The calculated energies are in very good agreement with most observed levels. •Very good agreement of A-values and lifetimes with other relativistic calculations. •The results should provide precise nebular abundances, chemical evolution etc.

  16. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real-world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176
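
    A rough sketch of the flavour of these measures (hedged; it is not the paper's exact estimator) is given below in Python: form W as the reference transition probabilities damped by exp(-theta * cost), invert I - W, and accumulate expected visit counts Z[s,j]*Z[j,t]/Z[s,t] over source-target pairs. The graph, costs and temperature are toy choices.

        # Toy RSP-flavoured betweenness sketch (not the published estimator): higher theta
        # concentrates the path distribution on shortest paths, lower theta spreads it
        # towards random-walk behaviour.
        import numpy as np

        A = np.array([[0, 1, 1, 0],                  # small undirected toy graph
                      [1, 0, 1, 1],
                      [1, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
        P_ref = A / A.sum(axis=1, keepdims=True)     # random-walk reference probabilities
        C = np.ones_like(A)                          # unit costs on edges

        theta = 1.0                                  # inverse temperature
        W = P_ref * np.exp(-theta * C)               # P_ref is zero off-edges, so W is too
        Z = np.linalg.inv(np.eye(len(A)) - W)        # fundamental matrix

        betweenness = np.zeros(len(A))
        for s in range(len(A)):
            for t in range(len(A)):
                if s != t:
                    betweenness += Z[s, :] * Z[:, t] / Z[s, t]   # expected visits on s->t paths
        print(betweenness)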

  17. The influence of NDT-Bobath and PNF methods on the field support and total path length measure foot pressure (COP) in patients after stroke.

    PubMed

    Krukowska, Jolanta; Bugajski, Marcin; Sienkiewicz, Monika; Czernicki, Jan

    In stroke patients, the NDT-Bobath (Neurodevelopmental Treatment) and PNF (Proprioceptive Neuromuscular Facilitation) methods are used to achieve the main objective of rehabilitation, which aims at the restoration of maximum patient independence in the shortest possible period of time (especially the balance of the body). The aim of the study is to evaluate the effect of the NDT-Bobath and PNF methods on the support area and the total path length of the centre of foot pressure (COP) in patients after stroke. The study included 72 patients aged from 20 to 69 years after ischemic stroke with hemiparesis. The patients were divided into 4 groups by simple randomization. The criteria for this division were the body side (right or left) affected by paresis and the applied rehabilitation method. All patients received the assigned kinesitherapeutic method (randomized): 35 therapy sessions, daily, over a period of six weeks. Before initiation of therapy and after 6 weeks, the total support area and the COP (centre of pressure) path length were measured using an alpha stabilometric platform. The results were statistically analyzed. After treatment, the studied parameters decreased in all groups. The greatest improvement was obtained in the groups receiving NDT-Bobath therapy. The NDT-Bobath method is a more effective treatment for improving the balance of the body than the PNF method. In stroke patients, the effectiveness of the NDT-Bobath method does not depend on the side affected by paresis.

  18. Absolute entropy and free energy of fluids using the hypothetical scanning method. I. Calculation of transition probabilities from local grand canonical partition functions

    NASA Astrophysics Data System (ADS)

    Szarecka, Agnieszka; White, Ronald P.; Meirovitch, Hagai

    2003-12-01

    The hypothetical scanning (HS) method provides the absolute entropy and free energy from a Boltzmann sample generated by Monte Carlo, molecular dynamics or any other exact simulation procedure. Thus far HS has been applied successfully to magnetic and polymer chain models; in this paper and the following one it is extended to fluid systems by treating a Lennard-Jones model of argon. With HS a probability Pi approximating the Boltzmann probability of system configuration i is calculated with a stepwise reconstruction procedure, based on adding atoms gradually layer-by-layer to an initially empty volume, where they are replaced in their positions at i. At each step a transition probability (TP) is obtained from local grand canonical partition functions calculated over a limited space of the still unvisited (future) volume, the larger this space the better the approximation. Pi is the product of the step TPs, where ln Pi is an upper bound of the absolute entropy, which leads to upper and lower bounds for the free energy. We demonstrate that very good results for the entropy and the free energy can be obtained for a wide range of densities of the argon system by calculating TPs that are based on only a very limited future volume.

  19. Calculation for path-domain independent J integral with elasto-viscoplastic consistent tangent operator concept-based boundary element methods

    NASA Astrophysics Data System (ADS)

    Yong, Liu; Qichao, Hong; Lihua, Liang

    1999-05-01

    This paper presents an elasto-viscoplastic consistent tangent operator (CTO) based boundary element formulation, and its application to the calculation of path-domain independent J integrals (an extension of the classical J integrals) in nonlinear crack analysis. When viscoplastic deformation happens, the effective stresses around the crack tip in the nonlinear region are allowed to exceed the loading surface, and pure plastic theory is not suitable for this situation. The concept of consistency employed in the solution of the incremental viscoplastic problem plays a crucial role in preserving the quadratic rate of asymptotic convergence of iterative schemes based on Newton's method. Therefore, this paper investigates the viscoplastic crack problem, and presents an implicit viscoplastic algorithm using the CTO concept in a boundary element framework for path-domain independent J integrals. Applications are presented with two numerical examples for viscoplastic crack problems and J integrals.

  20. Using Logistic Regression and Random Forests multivariate statistical methods for landslide spatial probability assessment in North-Est Sicily, Italy

    NASA Astrophysics Data System (ADS)

    Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele

    2015-04-01

    The first phase of the work was addressed to identifying the spatial relationships between the landslide locations and the 13 related factors by using the Frequency Ratio bivariate statistical method. The analysis was then carried out by adopting a multivariate statistical approach, according to the Logistic Regression technique and the Random Forests technique, which gave the best results in terms of AUC. The models were run and evaluated with different sample sizes, also taking into account the temporal variation of input variables such as areas burned by wildfire. The most significant outcomes of this work are the relevant influence of the sample size on the model results and the strong importance of some environmental factors (e.g. land use and wildfires) for the identification of the depletion zones of extremely rapid shallow landslides.
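
    The multivariate step can be mimicked with standard tooling. The Python sketch below trains Logistic Regression and Random Forests on synthetic stand-ins for the 13 predisposing factors and compares them by AUC; it only illustrates the workflow and does not reproduce the Sicilian dataset or the authors' sampling design.

        # Workflow sketch: fit Logistic Regression and Random Forests on landslide
        # presence/absence with 13 synthetic predictors and compare them by AUC.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(5)
        X = rng.normal(size=(2000, 13))                       # 13 predisposing factors
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        for name, model in [("Logistic Regression", LogisticRegression(max_iter=1000)),
                            ("Random Forests", RandomForestClassifier(n_estimators=200))]:
            model.fit(X_tr, y_tr)
            auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
            print(f"{name}: AUC = {auc:.3f}")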

  1. Finding the biased-shortest path with minimal congestion in networks via linear-prediction of queue length

    NASA Astrophysics Data System (ADS)

    Shen, Yi; Ren, Gang; Liu, Yang

    2016-06-01

    In this paper, we propose a biased-shortest path method with minimal congestion. In the method, we use linear-prediction to estimate the queue length of nodes, and propose a dynamic accepting probability function for nodes to decide whether accept or reject the incoming packets. The dynamic accepting probability function is based on the idea of homogeneous network flow and is developed to enable nodes to coordinate their queue length to avoid congestion. A path strategy incorporated with the linear-prediction of the queue length and the dynamic accepting probability function of nodes is designed to allow packets to be automatically delivered on un-congested paths with short traveling time. Our method has the advantage of low computation cost because the optimal paths are dynamically self-organized by nodes in the delivering process of packets with local traffic information. We compare our method with the existing methods such as the efficient path method (EPS) and the optimal path method (OPS) on the BA scale-free networks and a real example. The numerical computations show that our method performs best for low network load and has minimum run time due to its low computational cost and local routing scheme.

  2. White Noise Path Integrals in Stochastic Neurodynamics

    NASA Astrophysics Data System (ADS)

    Carpio-Bernido, M. Victoria; Bernido, Christopher C.

    2008-06-01

    The white noise path integral approach is used in stochastic modeling of neural activity, where the primary dynamical variables are the relative membrane potentials, while information on transmembrane ionic currents is contained in the drift coefficient. The white noise path integral allows a natural framework and can be evaluated explicitly to yield a closed form for the conditional probability density.

  3. Efficient Probability Sequences

    DTIC Science & Technology

    2014-08-18

    Ungar (2014), to produce a distinct forecasting system. The system consists of the method for eliciting individual subjective forecasts together with...

  4. Asymptotics of Selberg-like integrals by lattice path counting

    SciTech Connect

    Novaes, Marcel

    2011-04-15

    We obtain explicit expressions for positive integer moments of the probability density of eigenvalues of the Jacobi and Laguerre random matrix ensembles, in the asymptotic regime of large dimension. These densities are closely related to the Selberg and Selberg-like multidimensional integrals. Our method of solution is combinatorial: it consists in the enumeration of certain classes of lattice paths associated to the solution of recurrence relations.

  5. Nonadiabatic transition path sampling

    NASA Astrophysics Data System (ADS)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  6. Local path planning method of the self-propelled model based on reinforcement learning in complex conditions

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Pang, Yongjie; Li, Hongwei; Zhang, Rubo

    2014-09-01

    Conducting hydrodynamic and physical motion simulation tests using a large-scale self-propelled model under actual wave conditions is an important means for researching the environmental adaptability of ships. During the navigation test of the self-propelled model, the complex environment including various port facilities, navigation facilities, and the ships nearby must be considered carefully, because in this dense environment the impact of sea waves and winds on the model is particularly significant. In order to improve the safety of the self-propelled model, this paper introduces Q-learning, a reinforcement learning method combined with chaotic exploration ideas, for the model's collision avoidance, in order to improve the reliability of local path planning. Simulation and sea test results show that this algorithm is a better solution for collision avoidance of the self-propelled model under the interference of sea winds and waves, with good adaptability.
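
    A minimal tabular Q-learning update of the kind the abstract builds on is sketched below in Python; the state encoding for the self-propelled model, the reward shaping, and the paper's chaos-based exploration are not reproduced (plain epsilon-greedy exploration is used instead), so everything here is an illustrative assumption.

        import random

        def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                            alpha=0.1, gamma=0.9, epsilon=0.2):
            """One tabular Q-learning update for local collision avoidance.
            The state encoding (e.g. discretized obstacle bearing/range and own
            heading) and the reward shaping are assumptions; epsilon-greedy
            exploration stands in for the paper's chaos-based exploration."""
            if random.random() < epsilon:
                action = random.choice(actions)                              # explore
            else:
                action = max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit
            reward = reward_fn(state, action)      # e.g. step penalty, large penalty near obstacles
            nxt = next_state_fn(state, action)
            best_next = max(Q.get((nxt, a), 0.0) for a in actions)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            return nxt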

  7. Regional flood probabilities

    USGS Publications Warehouse

    Troutman, B.M.; Karlinger, M.R.

    2003-01-01

    The T-year annual maximum flood at a site is defined to be the streamflow that has probability 1/T of being exceeded in any given year, and for a group of sites the corresponding regional flood probability (RFP) is the probability that at least one site will experience a T-year flood in any given year. The RFP depends on the number of sites of interest and on the spatial correlation of flows among the sites. We present a Monte Carlo method for obtaining the RFP and demonstrate that spatial correlation estimates used in this method may be obtained with rank-transformed data and therefore that knowledge of the at-site peak flow distribution is not necessary. We examine the extent to which the estimates depend on specification of a parametric form for the spatial correlation function, which is known to be nonstationary for peak flows. It is shown in a simulation study that use of a stationary correlation function to compute RFPs yields satisfactory estimates for certain nonstationary processes. Application of asymptotic extreme value theory is examined, and a methodology for separating channel network and rainfall effects on RFPs is suggested. A case study is presented using peak flow data from the state of Washington. For 193 sites in the Puget Sound region it is estimated that a 100-year flood will occur on average every 4.5 years.
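
    A hedged Python sketch of the Monte Carlo idea: simulate spatially correlated annual maxima (here with a Gaussian copula, which is an assumption, not necessarily the authors' model) and count the fraction of simulated years in which at least one site exceeds its T-year quantile; because only ranks matter, no at-site marginal distribution is needed.

        import numpy as np
        from scipy.stats import norm

        def regional_flood_probability(corr, T=100, years=200_000, seed=0):
            """Monte Carlo estimate of the probability that at least one site
            experiences its T-year flood in a given year.  Spatial dependence is
            modeled with a Gaussian copula with correlation matrix `corr` (an
            illustrative choice); only ranks matter, so no at-site marginal
            distribution has to be specified."""
            rng = np.random.default_rng(seed)
            z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr, size=years)
            threshold = norm.ppf(1.0 - 1.0 / T)        # T-year event in standard-normal ranks
            return float((z > threshold).any(axis=1).mean())

        # Two sites with moderately correlated annual maxima:
        corr = np.array([[1.0, 0.6], [0.6, 1.0]])
        print(regional_flood_probability(corr))        # between 1/100 and 2/100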

  8. Absolute entropy and free energy of fluids using the hypothetical scanning method. II. Transition probabilities from canonical Monte Carlo simulations of partial systems

    NASA Astrophysics Data System (ADS)

    White, Ronald P.; Meirovitch, Hagai

    2003-12-01

    A variant of the hypothetical scanning (HS) method for calculating the absolute entropy and free energy of fluids is developed, as applied to systems of Lennard-Jones atoms (liquid argon). As in the preceding paper (Paper I), a probability Pi, approximating the Boltzmann probability of system configuration i, is calculated with a reconstruction procedure based on adding the atoms gradually to an initially empty volume, where they are placed in their positions at i; in this process the volume is divided into cubic cells, which are visited layer-by-layer, line-by-line. At each step a transition probability (TP) is calculated, and the product of all the TPs leads to Pi. At step k, k-1 cells have already been treated, among them Nk occupied by an atom. A canonical Metropolis Monte Carlo (MC) simulation is carried out over a portion of the still unvisited (future) volume, thus providing an approximate representation of the N-Nk as yet untreated (future) atoms. The TP of target cell k is determined from the number of visits of future atoms to this cell during the simulation. This MC version of HS, called HSMC, is based on a relatively small number of efficiency parameters; their number does not grow and their values are not changed as the number of the treated future atoms is increased (i.e., as the approximation improves); therefore, implementing HSMC for a relatively large number of future atoms (up to 40 in this study) is straightforward. Indeed, excellent results have been obtained for the free energy and the entropy.
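
    The core bookkeeping of the reconstruction can be sketched as follows; the estimation of each transition probability from future-atom Monte Carlo visit counts is not reproduced, and the entropy estimator shown (S approximated by -kB times the average of ln Pi over Boltzmann-sampled configurations) is stated here as an assumption about the generic hypothetical-scanning relation, not as the paper's full procedure.

        import math

        K_B = 1.380649e-23  # Boltzmann constant, J/K

        def reconstruct_log_probability(transition_probs):
            """ln Pi for one configuration: Pi is the product of the cell-by-cell
            transition probabilities collected while rebuilding the configuration."""
            return sum(math.log(tp) for tp in transition_probs)

        def approximate_entropy(log_probs_per_configuration):
            """Hypothetical-scanning entropy estimate S ~ -k_B * <ln Pi>, averaged
            over configurations sampled from the Boltzmann distribution.  How each
            transition probability is estimated from future-atom MC visit counts
            is not reproduced here."""
            return -K_B * sum(log_probs_per_configuration) / len(log_probs_per_configuration)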

  9. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  10. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
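
    One classic instance is the probability that a random permutation has no fixed point (a derangement), which tends to 1/e; whether this is among the article's three problems is not stated here, so the following Python simulation is purely illustrative.

        import math
        import random

        def derangement_probability(n=20, trials=100_000, seed=1):
            """Estimate the probability that a uniformly random permutation of n
            items leaves no item in its original position; it tends to 1/e."""
            rng = random.Random(seed)
            items = list(range(n))
            hits = 0
            for _ in range(trials):
                perm = items[:]
                rng.shuffle(perm)
                if all(p != i for i, p in enumerate(perm)):
                    hits += 1
            return hits / trials

        print(derangement_probability(), 1 / math.e)   # both close to 0.3679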

  11. A low false negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method.

    PubMed

    Song, Dezhen; Xu, Yiliang

    2010-09-01

    We report a new filter to assist the search for rare bird species. Since a rare bird only appears in front of a camera with very low occurrence (e.g., less than ten times per year) for very short duration (e.g., less than a fraction of a second), our algorithm must have a very low false negative rate. We verify the bird body axis information with the known bird flying dynamics from the short video segment. Since a regular extended Kalman filter (EKF) cannot converge due to high measurement error and limited data, we develop a novel probable observation data set (PODS)-based EKF method. The new PODS-EKF searches the measurement error range for all probable observation data that ensures the convergence of the corresponding EKF in short time frame. The algorithm has been extensively tested using both simulated inputs and real video data of four representative bird species. In the physical experiments, our algorithm has been tested on rock pigeons and red-tailed hawks with 119 motion sequences. The area under the ROC curve is 95.0%. During the one-year search of ivory-billed woodpeckers, the system reduces the raw video data of 29.41 TB to only 146.7 MB (reduction rate 99.9995%).

  12. An Alternative Method to Compute the Bit Error Probability of Modulation Schemes Subject to Nakagami-m Fading

    NASA Astrophysics Data System (ADS)

    Queiroz, Wamberto J. L.; Lopes, Waslon T. A.; Madeiro, Francisco; Alencar, Marcelo S.

    2010-12-01

    This paper presents an alternative method for determining exact expressions for the bit error probability (BEP) of modulation schemes subject to Nakagami-m fading. In this method, the Nakagami-m fading channel is seen as an additive noise channel whose noise is modeled as the ratio between Gaussian and Nakagami-m random variables. The method consists of using the cumulative distribution function of the resulting noise to obtain closed-form expressions for the BEP of modulation schemes subject to Nakagami-m fading. In particular, the proposed method is used to obtain closed-form expressions for the BEP of M-ary quadrature amplitude modulation (M-QAM), M-ary pulse amplitude modulation (M-PAM), and rectangular quadrature amplitude modulation (R-QAM) under Nakagami-m fading. The main contribution of this paper is to show that this alternative method can be used to reduce the computational complexity for detecting signals in the presence of fading.
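
    For comparison, the conventional way to obtain the same BEP numerically is to average the AWGN error rate over the Gamma-distributed instantaneous SNR; the Python sketch below does exactly that for BPSK. It is not the paper's ratio-of-random-variables method, only a standard baseline one might validate it against.

        import numpy as np
        from scipy import integrate, special

        def bep_bpsk_nakagami(m=2.0, snr_db=10.0):
            """BEP of BPSK over Nakagami-m fading by the conventional averaging
            approach (not the paper's method): average the AWGN error rate
            Q(sqrt(2*g)) over the Gamma-distributed instantaneous SNR g."""
            snr = 10.0 ** (snr_db / 10.0)
            q = lambda x: 0.5 * special.erfc(x / np.sqrt(2.0))
            pdf = lambda g: (m / snr) ** m * g ** (m - 1) * np.exp(-m * g / snr) / special.gamma(m)
            value, _ = integrate.quad(lambda g: q(np.sqrt(2.0 * g)) * pdf(g), 0.0, np.inf)
            return value

        print(bep_bpsk_nakagami(m=1.0, snr_db=10.0))   # m = 1 reduces to Rayleigh fading, ~0.023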

  13. Most probable number method combined with nested polymerase chain reaction for detection and enumeration of enterotoxigenic Clostridium perfringens in intestinal contents of cattle, pig and chicken.

    PubMed

    Miwa, N; Nishina, T; Kubo, S; Atsumi, M

    1997-02-01

    The most probable number (MPN) method combined with a nested polymerase chain reaction (nested PCR) for the detection and enumeration of enterotoxigenic Clostridium perfringens in the intestinal contents of cattle, pig and chicken was examined. Ten-fold serial dilutions of samples were added to three tubes of enrichment medium, which were incubated at 37 degrees C for 20-24 hr, and the C. perfringens enterotoxin gene was detected by nested PCR from the enrichment culture without isolating the organism. The results obtained by this method with artificially contaminated intestinal contents were significantly correlated with those obtained by a plate count method. When the method was applied to the detection and enumeration of indigenous enterotoxigenic C. perfringens, the organism was found in two, two and three samples of 10 intestinal contents of cattle, pig and chicken, respectively. Most of the positive samples contained fewer than 10 MPN/g of enterotoxigenic C. perfringens, except one sample of chicken, which contained 1.5 x 10^2 MPN/g. The MPN method combined with nested PCR is easy to perform and may be a useful tool for the detection and enumeration of enterotoxigenic C. perfringens in intestinal contents.
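
    The MPN itself is a maximum-likelihood estimate from the pattern of positive tubes; a minimal Python sketch is given below, with the three-tube, three-dilution design and the example inoculum masses chosen purely for illustration.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def most_probable_number(positives, tubes, inoculum_g):
            """Maximum-likelihood MPN (organisms per gram): `positives[i]` of
            `tubes[i]` tubes were positive at `inoculum_g[i]` grams of sample per
            tube.  The dilution design below is illustrative."""
            positives = np.asarray(positives, float)
            tubes = np.asarray(tubes, float)
            v = np.asarray(inoculum_g, float)

            def neg_log_lik(log_lam):
                p = 1.0 - np.exp(-np.exp(log_lam) * v)      # P(tube is positive)
                p = np.clip(p, 1e-12, 1.0 - 1e-12)
                return -np.sum(positives * np.log(p) + (tubes - positives) * np.log(1.0 - p))

            res = minimize_scalar(neg_log_lik, bounds=(-10.0, 15.0), method="bounded")
            return float(np.exp(res.x))

        # Three tubes each at 0.1, 0.01 and 0.001 g per tube, pattern 3-1-0:
        print(most_probable_number([3, 1, 0], [3, 3, 3], [0.1, 0.01, 0.001]))   # about 43 MPN/g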

  14. Master equations and the theory of stochastic path integrals.

    PubMed

    Weber, Markus F; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a 'generating functional', which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a 'forward' and a 'backward' path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon

  15. Master equations and the theory of stochastic path integrals

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers–Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman–Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered

  16. Path planning under spatial uncertainty.

    PubMed

    Wiener, Jan M; Lafon, Matthieu; Berthoz, Alain

    2008-04-01

    In this article, we present experiments studying path planning under spatial uncertainties. In the main experiment, the participants' task was to navigate the shortest possible path to find an object hidden in one of four places and to bring it to the final destination. The probability of finding the object (probability matrix) was different for each of the four places and varied between conditions. Given such uncertainties about the object's location, planning a single path is not sufficient. Participants had to generate multiple consecutive plans (metaplans)--for example: If the object is found in A, proceed to the destination; if the object is not found, proceed to B; and so on. The optimal solution depends on the specific probability matrix. In each condition, participants learned a different probability matrix and were then asked to report the optimal metaplan. Results demonstrate effective integration of the probabilistic information about the object's location during planning. We present a hierarchical planning scheme that could account for participants' behavior, as well as for systematic errors and differences between conditions.
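
    Evaluating a metaplan reduces to an expected-cost computation; the small Python sketch below enumerates visit orders and picks the one with minimum expected travel distance, under the simplifying assumptions (not from the article) that the object is surely in one of the places and that the probabilities do not depend on the search order.

        from itertools import permutations

        def expected_cost(order, probs, dist, start, goal):
            """Expected travel distance of the metaplan 'search the places in `order`
            until the object is found, then go to `goal`'.  `dist` maps ordered pairs
            of locations to travel distances; `probs` sums to 1 over the places."""
            cost, walked, current = 0.0, 0.0, start
            for place in order:
                walked += dist[(current, place)]
                current = place
                cost += probs[place] * (walked + dist[(place, goal)])
            return cost

        def best_metaplan(places, probs, dist, start, goal):
            """Brute-force search over all visit orders (fine for four places)."""
            return min(permutations(places),
                       key=lambda o: expected_cost(o, probs, dist, start, goal))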

  17. Estimating Critical Path and Arc Probabilities in Stochastic Activity Networks.

    DTIC Science & Technology

    1983-08-01

    N) is a function of the bounded variation of f in N and lower dimensions. Most importantly, uniform sequences exist for which D_K = O((log K)^N / K) (12) ... (N - |H| + 1)-dimensional integral over H with integrand of bounded variation and that (8), using (16) and (18), is an approximation to this

  18. Improved methods for Feynman path integral calculations of vibrational-rotational free energies and application to isotopic fractionation of hydrated chloride ions.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2009-04-23

    We present two enhancements to our methods for calculating vibrational-rotational free energies by Feynman path integrals, namely, a sequential sectioning scheme for efficiently generating random free-particle paths and a stratified sampling scheme that uses the energy of the path centroids. These improved methods are used with three interaction potentials to calculate equilibrium constants for the fractionation behavior of Cl(-) hydration in the presence of a gas-phase mixture of H(2)O, D(2)O, and HDO. Ion cyclotron resonance experiments indicate that the equilibrium constant, K(eq), for the reaction Cl(H(2)O)(-) + D(2)O ⇌ Cl(D(2)O)(-) + H(2)O is 0.76, whereas the three theoretical predictions are 0.946, 0.979, and 1.20. Similarly, the experimental K(eq) for the Cl(H(2)O)(-) + HDO ⇌ Cl(HDO)(-) + H(2)O reaction is 0.64 as compared to theoretical values of 0.972, 0.998, and 1.10. Although Cl(H(2)O)(-) has a large degree of anharmonicity, K(eq) values calculated with the harmonic oscillator rigid rotator (HORR) approximation agree with the accurate treatment to within better than 2% in all cases. Results of a variety of electronic structure calculations, including coupled cluster and multireference configuration interaction calculations, with either the HORR approximation or with anharmonicity estimated via second-order vibrational perturbation theory, all agree well with the equilibrium constants obtained from the analytical surfaces.

  19. Experimental validation of Monte Carlo and finite-element methods for the estimation of the optical path length in inhomogeneous tissue

    NASA Astrophysics Data System (ADS)

    Okada, Eiji; Schweiger, Martin; Arridge, Simon R.; Firbank, Michael; Delpy, David T.

    1996-07-01

    To validate models of light propagation in biological tissue, experiments to measure the mean time of flight have been carried out on several solid cylindrical layered phantoms. The optical properties of the inner cylinders of the phantoms were close to those of adult brain white matter, whereas a range of scattering or absorption coefficients was chosen for the outer layer. Experimental results for the mean optical path length have been compared with the predictions of both an exact Monte Carlo (MC) model and a diffusion equation, with two differing boundary conditions implemented in a finite-element method (FEM). The MC and experimental results are in good agreement despite poor statistics for large fiber spacings, whereas good agreement with the FEM prediction requires a careful choice of proper boundary conditions.

  20. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied for the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models.

  1. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  2. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are <1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.

  3. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal-spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.
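
    A hedged Python sketch of such a temporal-spatial probability that accounts for both dimensions at once, i.e. the width of the falling mass plus the length of the vehicle; the formula (a sparse, uniform-traffic occupation fraction) and the parameter names are illustrative assumptions, not necessarily the expressions recommended in the paper.

        def direct_impact_probability(vehicles_per_day, speed_kmh,
                                      vehicle_length_m, mass_width_m):
            """Probability that a vehicle occupies the threatened stretch at the
            (random) instant the mass reaches the road, accounting for both the
            width of the falling mass and the length of the vehicle.  Assumes
            sparse, uniform traffic; names and formula are illustrative."""
            exposed_length_m = vehicle_length_m + mass_width_m
            # Each vehicle 'occupies' the stretch for exposed_length / speed hours.
            occupied_hours_per_day = vehicles_per_day * exposed_length_m / (speed_kmh * 1000.0)
            return min(occupied_hours_per_day / 24.0, 1.0)

        # 5000 vehicles/day at 80 km/h, 5 m long cars, 20 m wide rockfall:
        print(direct_impact_probability(5000, 80.0, 5.0, 20.0))   # ~0.065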

  4. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do, however, consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.

  5. Modeling Transport in Fractured Porous Media with the Random-Walk Particle Method: The Transient Activity Range and the Particle-Transfer Probability

    SciTech Connect

    Lehua Pan; G.S. Bodvarsson

    2001-10-22

    Multiscale features of transport processes in fractured porous media make numerical modeling a difficult task, both in conceptualization and computation. Modeling the mass transfer through the fracture-matrix interface is one of the critical issues in the simulation of transport in a fractured porous medium. Because conventional dual-continuum-based numerical methods are unable to capture the transient features of the diffusion depth into the matrix (unless they assume a passive matrix medium), such methods will overestimate the transport of tracers through the fractures, especially for the cases with large fracture spacing, resulting in artificial early breakthroughs. We have developed a new method for calculating the particle-transfer probability that can capture the transient features of diffusion depth into the matrix within the framework of the dual-continuum random-walk particle method (RWPM) by introducing a new concept of activity range of a particle within the matrix. Unlike the multiple-continuum approach, the new dual-continuum RWPM does not require using additional grid blocks to represent the matrix. It does not assume a passive matrix medium and can be applied to the cases where global water flow exists in both continua. The new method has been verified against analytical solutions for transport in the fracture-matrix systems with various fracture spacing. The calculations of the breakthrough curves of radionuclides from a potential repository to the water table in Yucca Mountain demonstrate the effectiveness of the new method for simulating 3-D, mountain-scale transport in a heterogeneous, fractured porous medium under variably saturated conditions.

  6. Improvement of the quantitation method for the tdh+ Vibrio parahaemolyticus in molluscan shellfish based on most-probable- number, immunomagnetic separation, and loop-mediated isothermal amplification

    PubMed Central

    Escalante-Maldonado, Oscar; Kayali, Ahmad Y.; Yamazaki, Wataru; Vuddhakul, Varaporn; Nakaguchi, Yoshitsugu; Nishibuchi, Mitsuaki

    2015-01-01

    Vibrio parahaemolyticus is a marine microorganism that can cause seafood-borne gastroenteritis in humans. The infection can be spread and has become a pandemic through the international trade of contaminated seafood. Strains carrying the tdh gene encoding the thermostable direct hemolysin (TDH) and/or the trh gene encoding the TDH-related hemolysin (TRH) are considered to be pathogenic with the former gene being the most frequently found in clinical strains. However, their distribution frequency in environmental isolates is below 1%. Thus, very sensitive methods are required for detection and quantitation of tdh+ strains in seafood. We previously reported a method to detect and quantify tdh+ V. parahaemolyticus in seafood. This method consists of three components: the most-probable-number (MPN), the immunomagnetic separation (IMS) targeting all established K antigens, and the loop-mediated isothermal amplification (LAMP) targeting the tdh gene. However, this method faces regional issues in tropical zones of the world. Technicians have difficulties in securing dependable reagents in high-temperature climates where we found MPN underestimation in samples having tdh+ strains as well as other microorganisms present at high concentrations. In the present study, we solved the underestimation problem associated with the salt polymyxin broth enrichment for the MPN component and with the immunomagnetic bead-target association for the IMS component. We also improved the supply and maintenance of the dependable reagents by introducing a dried reagent system to the LAMP component. The modified method is specific, sensitive, quick and easy and applicable regardless of the concentrations of tdh+ V. parahaemolyticus. Therefore, we conclude this modified method is useful in world tropical, sub-tropical, and temperate zones. PMID:25914681

  7. Improvement of the quantitation method for the tdh (+) Vibrio parahaemolyticus in molluscan shellfish based on most-probable- number, immunomagnetic separation, and loop-mediated isothermal amplification.

    PubMed

    Escalante-Maldonado, Oscar; Kayali, Ahmad Y; Yamazaki, Wataru; Vuddhakul, Varaporn; Nakaguchi, Yoshitsugu; Nishibuchi, Mitsuaki

    2015-01-01

    Vibrio parahaemolyticus is a marine microorganism that can cause seafood-borne gastroenteritis in humans. The infection can be spread and has become a pandemic through the international trade of contaminated seafood. Strains carrying the tdh gene encoding the thermostable direct hemolysin (TDH) and/or the trh gene encoding the TDH-related hemolysin (TRH) are considered to be pathogenic with the former gene being the most frequently found in clinical strains. However, their distribution frequency in environmental isolates is below 1%. Thus, very sensitive methods are required for detection and quantitation of tdh (+) strains in seafood. We previously reported a method to detect and quantify tdh (+) V. parahaemolyticus in seafood. This method consists of three components: the most-probable-number (MPN), the immunomagnetic separation (IMS) targeting all established K antigens, and the loop-mediated isothermal amplification (LAMP) targeting the tdh gene. However, this method faces regional issues in tropical zones of the world. Technicians have difficulties in securing dependable reagents in high-temperature climates where we found MPN underestimation in samples having tdh (+) strains as well as other microorganisms present at high concentrations. In the present study, we solved the underestimation problem associated with the salt polymyxin broth enrichment for the MPN component and with the immunomagnetic bead-target association for the IMS component. We also improved the supply and maintenance of the dependable reagents by introducing a dried reagent system to the LAMP component. The modified method is specific, sensitive, quick and easy and applicable regardless of the concentrations of tdh (+) V. parahaemolyticus. Therefore, we conclude this modified method is useful in world tropical, sub-tropical, and temperate zones.

  8. Application of a maximum entropy method to estimate the probability density function of nonlinear or chaotic behavior in structural health monitoring data

    NASA Astrophysics Data System (ADS)

    Livingston, Richard A.; Jin, Shuang

    2005-05-01

    Bridges and other civil structures can exhibit nonlinear and/or chaotic behavior under ambient traffic or wind loadings. The probability density function (pdf) of the observed structural responses thus plays an important role for long-term structural health monitoring, LRFR and fatigue life analysis. However, the actual pdf of such structural response data often has a very complicated shape due to its fractal nature. Various conventional methods to approximate it can often lead to biased estimates. This paper presents recent research progress at the Turner-Fairbank Highway Research Center of the FHWA in applying a novel probabilistic scaling scheme for enhanced maximum entropy evaluation to find the most unbiased pdf. The maximum entropy method is applied with a fractal interpolation formulation based on contraction mappings through an iterated function system (IFS). Based on a fractal dimension determined from the entire response data set by an algorithm involving the information dimension, a characteristic uncertainty parameter, called the probabilistic scaling factor, can be introduced. This allows significantly enhanced maximum entropy evaluation through the added inferences about the fine scale fluctuations in the response data. Case studies using the dynamic response data sets collected from a real world bridge (Commodore Barry Bridge, PA) and from the simulation of a classical nonlinear chaotic system (the Lorenz system) are presented in this paper. The results illustrate the advantages of the probabilistic scaling method over conventional approaches for finding the unbiased pdf especially in the critical tail region that contains the larger structural responses.
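
    The plain maximum-entropy step can be sketched as below: the least-biased pdf consistent with a few sample moments has exponential-family form, and its Lagrange multipliers are found by minimizing a convex dual. The paper's fractal-interpolation and probabilistic-scaling enhancements are not reproduced, and all numerical choices here are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def maxent_pdf(samples, n_moments=4, n_grid=1000):
            """Maximum-entropy pdf consistent with the first `n_moments` sample
            moments on the observed data range.  The solution has the exponential
            family form p(x) ~ exp(sum_k lam_k * x**k); the multipliers are found
            by minimizing the convex dual  log Z(lam) - lam . moments."""
            samples = np.asarray(samples, float)
            x = np.linspace(samples.min(), samples.max(), n_grid)
            dx = x[1] - x[0]
            phi = np.vstack([x ** k for k in range(1, n_moments + 1)])
            targets = np.array([np.mean(samples ** k) for k in range(1, n_moments + 1)])

            def dual(lam):
                logits = lam @ phi
                m = logits.max()
                return m + np.log(np.sum(np.exp(logits - m)) * dx) - lam @ targets

            lam = minimize(dual, np.zeros(n_moments), method="BFGS").x
            p = np.exp(lam @ phi - (lam @ phi).max())
            return x, p / (p.sum() * dx)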

  9. Review of pipe-break probability assessment methods and data for applicability to the advanced neutron source project for Oak Ridge National Laboratory

    SciTech Connect

    Fullwood, R.R.

    1989-04-01

    The Advanced Neutron Source (ANS) (Difilippo, 1986; Gamble, 1986; West, 1986; Selby, 1987) will be the world's best facility for low-energy neutron research. This performance requires the highest flux density of all non-pulsed reactors, with concomitantly low thermal inertia and fast response to upset conditions. One of the primary concerns is that a flow cessation of the order of a second may result in fuel damage. Such a flow stoppage could be the result of a break in the primary piping. This report is a review of methods for assessing pipe break probabilities based on historical operating experience in power reactors, scaling methods, fracture mechanics and fracture growth models. The goal of this work is to develop parametric guidance for the ANS design to make the event highly unlikely. A further goal is to review and select methods that may be used in an interactive IBM-PC model providing fast and reasonably accurate models to aid the ANS designers in achieving the safety requirements. 80 refs., 7 figs.

  10. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    NASA Astrophysics Data System (ADS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at the CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished with the continuous-energy Monte Carlo code TRIPOLI-4® by using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it needs to reach a very small variance associated with the reactivity in both states. To address this problem, it has been decided to implement the exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has shown great results in some other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux, and consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained by the deterministic code APOLLO-2. The new implementation can calculate the angular adjoint flux. In the second part, a procedure to carry out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method, compared with the "direct" estimation of the perturbation. Once again the method based on the IFP shows good agreement, with a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows us to calculate very small reactivity perturbations with high

  11. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decisionmaker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its content

  12. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decision maker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its

  13. Reaching the Hard-to-Reach: A Probability Sampling Method for Assessing Prevalence of Driving under the Influence after Drinking in Alcohol Outlets

    PubMed Central

    De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite

    2012-01-01

    Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlets (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking in less than one hour after the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620

  14. Reaching the hard-to-reach: a probability sampling method for assessing prevalence of driving under the influence after drinking in alcohol outlets.

    PubMed

    De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite

    2012-01-01

    Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge--since there is no a priori information on this mobile population--and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlets (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking in less than one hour after the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations.

  15. Matrix-Specific Method Validation of an Automated Most-Probable-Number System for Use in Measuring Bacteriological Quality of Grade "A" Milk Products.

    PubMed

    Lindemann, Samantha; Kmet, Matthew; Reddy, Ravinder; Uhlig, Steffen

    2016-11-01

    The U.S. Food and Drug Administration (FDA) oversees a long-standing cooperative federal and state milk sanitation program that uses the grade "A" Pasteurized Milk Ordinance standards to maintain the safety of grade "A" milk sold in the United States. The Pasteurized Milk Ordinance requires that grade "A" milk samples be tested using validated total aerobic bacterial and coliform count methods. The objective of this project was to conduct an interlaboratory method validation study to compare performance of a film plate method with an automated most-probable-number method for total aerobic bacterial and coliform counts, using statistical approaches from international data standards. The matrix-specific validation study was administered concurrently with the FDA's annual milk proficiency test to compare method performance in five milk types. Eighteen analysts from nine laboratories analyzed test portions from 12 samples in triplicate. Statistics, including mean bias and matrix standard deviation, were calculated. Sample-specific bias of the alternative method for total aerobic count suggests that there are no large deviations within the population of samples considered. Based on analysis of 648 data points, mean bias of the alternative method across milk samples for total aerobic count was 0.013 log CFU/ml and the confidence interval for mean deviation was -0.066 to 0.009 log CFU/ml. These results indicate that the mean difference between the selected methods is small and not statistically significant. Matrix standard deviation was 0.077 log CFU/ml, showing that there is a low risk for large sample-specific bias based on milk matrix. Mean bias of the alternative method was -0.160 log CFU/ml for coliform count data. The 95% confidence interval was -0.210 to -0.100 log CFU/ml, indicating that mean deviation is significantly different from zero. The standard deviation of the sample-specific bias for coliform data was 0.033 log CFU/ml, indicating no significant effect of

  16. A Microwave Radiometric Method to Obtain the Average Path Profile of Atmospheric Temperature and Humidity Structure Parameters and Its Application to Optical Propagation System Assessment

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.; Vyhnalek, Brian E.

    2015-01-01

    The values of the key atmospheric propagation parameters Ct2, Cq2, and Ctq are highly dependent upon the vertical height within the atmosphere, thus making it necessary to specify profiles of these values along the atmospheric propagation path. The remote sensing method suggested and described in this work makes use of a rapidly integrating microwave profiling radiometer to capture profiles of temperature and humidity through the atmosphere. The integration times of currently available profiling radiometers are such that they are approaching the temporal intervals over which one can possibly make meaningful assessments of these key atmospheric parameters. Since these parameters are fundamental to all propagation conditions, they can be used to obtain Cn2 profiles for any frequency, including those for an optical propagation path. In this case the important performance parameters of the prevailing isoplanatic angle and Greenwood frequency can be obtained. The integration times are such that Kolmogorov turbulence theory and the Taylor frozen-flow hypothesis must be transcended. Appropriate modifications to these classical approaches are derived from first principles and an expression for the structure functions is obtained. The theory is then applied to an experimental scenario and shows very good results.
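
    Once a Cn^2(h) profile is available, the two performance parameters named in the abstract follow from standard turbulence integrals; the Python sketch below uses the usual zenith-viewing expressions, with an illustrative single-layer profile and wavelength that are assumptions, not values from the paper.

        import numpy as np

        def _trapz(y, x):
            """Plain trapezoidal integration."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        def isoplanatic_angle_and_greenwood(h_m, cn2, wind_ms, wavelength_m=1.55e-6):
            """Zenith-viewing performance parameters from a Cn^2(h) profile (such as
            one built from radiometer-derived Ct^2, Cq^2 and Ctq profiles):
              theta0 = [2.914 k^2 * integral Cn^2(h) h^(5/3) dh]^(-3/5)             [rad]
              fG     = 2.31 lambda^(-6/5) * [integral Cn^2(h) V(h)^(5/3) dh]^(3/5)  [Hz]
            The wavelength and the example profile are illustrative assumptions."""
            h_m, cn2, wind_ms = map(np.asarray, (h_m, cn2, wind_ms))
            k = 2.0 * np.pi / wavelength_m
            theta0 = (2.914 * k ** 2 * _trapz(cn2 * h_m ** (5.0 / 3.0), h_m)) ** (-0.6)
            f_g = 2.31 * wavelength_m ** (-1.2) * _trapz(cn2 * wind_ms ** (5.0 / 3.0), h_m) ** 0.6
            return theta0, f_g

        # Crude single-layer example: Cn^2 = 1e-16 m^(-2/3) up to 10 km, 10 m/s wind everywhere.
        h = np.linspace(1.0, 10_000.0, 500)
        print(isoplanatic_angle_and_greenwood(h, np.full_like(h, 1e-16), np.full_like(h, 10.0)))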

  17. Probability and Relative Frequency

    NASA Astrophysics Data System (ADS)

    Drieschner, Michael

    2016-01-01

    The concept of probability seems to have been inexplicable since its invention in the seventeenth century. In its use in science, probability is closely related with relative frequency. So the task seems to be interpreting that relation. In this paper, we start with predicted relative frequency and show that its structure is the same as that of probability. I propose to call that the `prediction interpretation' of probability. The consequences of that definition are discussed. The "ladder"-structure of the probability calculus is analyzed. The expectation of the relative frequency is shown to be equal to the predicted relative frequency. Probability is shown to be the most general empirically testable prediction.

  18. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits of higher complexity; the proposed method also significantly decreases the probability of divergence when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this allows an algorithm for nonlinear circuit analysis to be proposed. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar junction (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed.
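
    For orientation, a plain homotopy-continuation sketch in Python is given below; it tracks H(x, t) = t f(x) + (1 - t)(x - x0) = 0 from t = 0 to t = 1 with Newton corrections, and deliberately omits the hyperspheres parametrization and the PWL-specific speed-ups that are the paper's contribution, so it is only a generic baseline.

        import numpy as np

        def homotopy_solve(f, jac, x0, steps=50, newton_iters=5):
            """Track H(x, t) = t*f(x) + (1 - t)*(x - x0) = 0 from the trivial
            solution at t = 0 to the target system at t = 1, applying a few
            Newton corrections at each continuation step."""
            x = np.array(x0, dtype=float)
            n = x.size
            for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
                for _ in range(newton_iters):
                    H = t * f(x) + (1.0 - t) * (x - x0)
                    J = t * jac(x) + (1.0 - t) * np.eye(n)
                    x = x - np.linalg.solve(J, H)
            return x

        # Example: a diode-like scalar equation exp(2x) + x - 3 = 0
        f = lambda x: np.array([np.exp(2.0 * x[0]) + x[0] - 3.0])
        jac = lambda x: np.array([[2.0 * np.exp(2.0 * x[0]) + 1.0]])
        print(homotopy_solve(f, jac, np.array([0.0])))   # ~0.47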

  19. Cluster membership probability: polarimetric approach

    NASA Astrophysics Data System (ADS)

    Medhi, Biman J.; Tamura, Motohide

    2013-04-01

    Interstellar polarimetric data of the six open clusters Hogg 15, NGC 6611, NGC 5606, NGC 6231, NGC 5749 and NGC 6250 have been used to estimate the membership probability for the stars within them. For proper-motion member stars, the membership probability estimated using the polarimetric data is in good agreement with the proper-motion cluster membership probability. However, for proper-motion non-member stars, the membership probability estimated by the polarimetric method is in total disagreement with the proper-motion cluster membership probability. The inconsistencies in the determined memberships may be because of the fundamental differences between the two methods of determination: one is based on stellar proper motion in space and the other is based on selective extinction of the stellar output by the asymmetric aligned dust grains present in the interstellar medium. The results and analysis suggest that the scatter of the Stokes vectors q (per cent) and u (per cent) for the proper-motion member stars depends on the interstellar and intracluster differential reddening in the open cluster. It is found that this method could be used to estimate the cluster membership probability if we have additional polarimetric and photometric information for a star to identify it as a probable member/non-member of a particular cluster, such as the maximum wavelength value (λmax), the unit weight error of the fit (σ1), the dispersion in the polarimetric position angles (ɛ̄), reddening (E(B - V)) or the differential intracluster reddening (ΔE(B - V)). This method could also be used to estimate the membership probability of known member stars having no membership probability as well as to resolve disagreements about membership among different proper-motion surveys.

  20. Opportunity's Path

    NASA Technical Reports Server (NTRS)

    2004-01-01

    fifteen sols. This will include El Capitan and probably one to two other areas.

    Blue Dot Dates:
    Sol 7 / Jan 31 = Egress & first soil data collected by instruments on the arm
    Sol 9 / Feb 2 = Second Soil Target
    Sol 12 / Feb 5 = First Rock Target
    Sol 16 / Feb 9 = Alpha Waypoint
    Sol 17 / Feb 10 = Bravo Waypoint
    Sol 19 or 20 / Feb 12 or 13 = Charlie Waypoint

  1. Simulating biochemical physics with computers: 1. Enzyme catalysis by phosphotriesterase and phosphodiesterase; 2. Integration-free path-integral method for quantum-statistical calculations

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu

    We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. Another reaction is the hydrolysis of the second-messenger cyclic adenosine 3',5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra®). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds in a two-step SN2-type mechanism. Furthermore, to characterize a chemical reaction mechanism in experiment, a direct probe is measuring kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and quantization of vibration. To systematically incorporate the quantum-statistical effects during MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs on several series of proton-transfer reactions from carboxylic acids to aryl-substituted alpha-methoxystyrenes in water. In addition, we also demonstrate that the AIF-PI method can be used to systematically compute the exact value of zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.

  2. Entanglement by Path Identity.

    PubMed

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-24

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces-starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  3. Entanglement by Path Identity

    NASA Astrophysics Data System (ADS)

    Krenn, Mario; Hochrainer, Armin; Lahiri, Mayukh; Zeilinger, Anton

    2017-02-01

    Quantum entanglement is one of the most prominent features of quantum mechanics and forms the basis of quantum information technologies. Here we present a novel method for the creation of quantum entanglement in multipartite and high-dimensional systems. The two ingredients are (i) superposition of photon pairs with different origins and (ii) aligning photons such that their paths are identical. We explain the experimentally feasible creation of various classes of multiphoton entanglement encoded in polarization as well as in high-dimensional Hilbert spaces—starting only from nonentangled photon pairs. For two photons, arbitrary high-dimensional entanglement can be created. The idea of generating entanglement by path identity could also apply to quantum entities other than photons. We discovered the technique by analyzing the output of a computer algorithm. This shows that computer designed quantum experiments can be inspirations for new techniques.

  4. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach

    SciTech Connect

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes.

  5. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach.

    PubMed

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes.

  6. What Are Probability Surveys?

    EPA Pesticide Factsheets

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  7. PathMaster

    PubMed Central

    Mattie, Mark E.; Staib, Lawrence; Stratmann, Eric; Tagare, Hemant D.; Duncan, James; Miller, Perry L.

    2000-01-01

    Objective: Currently, when cytopathology images are archived, they are typically stored with a limited text-based description of their content. Such a description inherently fails to quantify the properties of an image and refers to an extremely small fraction of its information content. This paper describes a method for automatically indexing images of individual cells and their associated diagnoses by computationally derived cell descriptors. This methodology may serve to better index data contained in digital image databases, thereby enabling cytologists and pathologists to cross-reference cells of unknown etiology or nature. Design: The indexing method, implemented in a program called PathMaster, uses a series of computer-based feature extraction routines. Descriptors of individual cell characteristics generated by these routines are employed as indexes of cell morphology, texture, color, and spatial orientation. Measurements: The indexing fidelity of the program was tested after populating its database with images of 152 lymphocytes/lymphoma cells captured from lymph node touch preparations stained with hematoxylin and eosin. Images of “unknown” lymphoid cells, previously unprocessed, were then submitted for feature extraction and diagnostic cross-referencing analysis. Results: PathMaster listed the correct diagnosis as its first differential in 94 percent of recognition trials. In the remaining 6 percent of trials, PathMaster listed the correct diagnosis within the first three “differentials.” Conclusion: PathMaster is a pilot cell image indexing program/search engine that creates an indexed reference of images. Use of such a reference may provide assistance in the diagnostic/prognostic process by furnishing a prioritized list of possible identifications for a cell of uncertain etiology. PMID:10887168

  8. Sensitive quantitative detection of Ralstonia solanacearum in soil by the most probable number-polymerase chain reaction (MPN-PCR) method.

    PubMed

    Inoue, Yasuhiro; Nakaho, Kazuhiro

    2014-05-01

    We developed a sensitive quantitative assay for detecting Ralstonia solanacearum in soil by most probable number (MPN) analysis based on bio-PCR results. For development of the detection method, we optimized an elution buffer containing 5 g/L skim milk for extracting bacteria from soil and reducing contamination by polymerase inhibitors in soil extracts. Because R. solanacearum can grow in water without any added nutrients, we used a cultivation buffer in the culture step of the bio-PCR that contained only the buffer and antibiotics to suppress the growth of other soil microorganisms. To quantify the bacterial population in soil, the elution buffer was added to 10 g soil on a dry weight basis so that the combined weight of buffer, soil, and soil-water was 50 g; 5 mL of soil extract was assumed to originate from 1 g of soil. The soil extract was divided into triplicate aliquots each of 5 mL and 500, 50, and 5 μL. Each aliquot was diluted with the cultivation buffer and incubated at 35 °C for about 24 h. After incubation, 5 μL of culture was directly used for nested PCR. The number of aliquots showing positive results was collectively checked against the MPN table. The method could quantify bacterial populations in soil down to 3 cfu/10 g dried soil and was successfully applied to several types of soil. We applied the method to the quantitative detection of R. solanacearum in horticultural soils, where it quantified small populations (9.3 cfu/g) that semiselective media were not able to detect.
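
    The MPN table lookup described above amounts to a maximum-likelihood estimate under a Poisson dilution model: an aliquot representing v grams of soil is PCR-positive with probability 1 - exp(-lambda*v). A hedged sketch of that calculation, with hypothetical positive counts and the volume-to-soil conversion stated in the record (5 mL of extract per 1 g of soil), is given below; it is not the authors' protocol or their published MPN table.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Grams of dry soil represented by 5 mL, 500 uL, 50 uL, and 5 uL aliquots
    volumes_g = np.array([1.0, 0.1, 0.01, 0.001])
    tubes     = np.array([3, 3, 3, 3])      # triplicate aliquots per dilution level
    positives = np.array([3, 2, 1, 0])      # hypothetical nested-PCR-positive counts

    def neg_log_lik(log_lambda):
        lam = np.exp(log_lambda)                    # cfu per gram of dry soil
        p_pos = 1.0 - np.exp(-lam * volumes_g)      # P(aliquot contains >= 1 cell)
        p_pos = np.clip(p_pos, 1e-12, 1.0 - 1e-12)
        return -np.sum(positives * np.log(p_pos)
                       + (tubes - positives) * np.log(1.0 - p_pos))

    res = minimize_scalar(neg_log_lik, bounds=(-5.0, 15.0), method="bounded")
    print("MPN estimate:", np.exp(res.x), "cfu per g dry soil")
    ```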

  9. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  10. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
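
    The chance-constraint idea in this record can be illustrated, very roughly, by the standard conversion of a probabilistic constraint on a Gaussian-distributed position into a deterministic constraint on the mean with a tightened margin; this one-dimensional sketch is not the authors' full planning-and-feedback algorithm, and the numbers are made up.

    ```python
    from statistics import NormalDist

    def tightened_limit(limit, sigma, delta):
        """If x ~ N(mu, sigma^2), then P(x > limit) <= delta  iff  mu <= limit - z*sigma,
        where z = Phi^{-1}(1 - delta)."""
        z = NormalDist().inv_cdf(1.0 - delta)
        return limit - z * sigma

    # Hypothetical numbers: obstacle boundary at 10 m, 0.5 m position uncertainty,
    # and at most a 1% probability of constraint violation allowed.
    print(tightened_limit(limit=10.0, sigma=0.5, delta=0.01))   # mean must stay below ~8.84 m
    ```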

  11. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
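
    For orientation, the LKB calculation referred to above reduces a dose-volume histogram to a generalized equivalent uniform dose and maps it to a complication probability through a probit curve. A minimal sketch of those standard formulas follows; the DVH and the parameter values (chosen near commonly quoted conventional spinal-cord fits) are purely illustrative, not the fits reported in this study.

    ```python
    import numpy as np
    from statistics import NormalDist

    def lkb_ntcp(doses_gy, vol_fractions, n, m, td50):
        """Lyman-Kutcher-Burman NTCP from a differential DVH.
        gEUD = (sum_i v_i * D_i**(1/n))**n ;  NTCP = Phi((gEUD - TD50) / (m * TD50))."""
        v = np.asarray(vol_fractions, dtype=float)
        v = v / v.sum()
        geud = float(np.sum(v * np.asarray(doses_gy, dtype=float) ** (1.0 / n)) ** n)
        t = (geud - td50) / (m * td50)
        return geud, NormalDist().cdf(t)

    # Hypothetical cord DVH: 10% of the volume at 20 Gy, 90% at 5 Gy
    geud, ntcp = lkb_ntcp([20.0, 5.0], [0.1, 0.9], n=0.05, m=0.175, td50=66.5)
    print(f"gEUD = {geud:.1f} Gy, NTCP = {ntcp:.2e}")
    ```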

  12. Dynamical Simulation of Probabilities

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1996-01-01

    It has been demonstrated that classical probabilities, and in particular, a probabilistic Turing machine, can be simulated by combining chaos and non-Lipschitz dynamics, without utilization of any man-made devices (such as random number generators). Self-organizing properties of systems coupling simulated and calculated probabilities and their link to quantum computations are discussed. Special attention was focused upon coupled stochastic processes, defined in terms of conditional probabilities, for which the joint probability does not exist. Simulations of quantum probabilities are also discussed.

  13. Energy Dissipation of Rayleigh Waves due to Absorption Along the Path by the Use of Finite Element Method

    DTIC Science & Technology

    1979-07-31

    [Symbol-list fragment: changes in matrix moduli and densities; P-wave velocity Vp; S-wave velocity Vs; shear and compressional quality factors.] Drake (26) also studied the motion of Rayleigh waves at a continental boundary. Waas (117) improved the finite element method to a general ... the viscoelastic part of the medium in this paper, three different sets of data are used: a small enough (.1 percent of the real part) imaginary

  14. Identifying the main paths of information diffusion in online social networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hengmin; Yin, Xicheng; Ma, Jing; Hu, Wei

    2016-06-01

    Recently, a growing body of research on relationship strength has shown that there are some socially active links in online social networks. Furthermore, it is likely that there exist main paths which play the most significant role in the process of information diffusion. Although much previous work has focused on the pathway of a specific event, hardly any studies have extracted these main paths. To identify the main paths of online social networks, we propose a method which measures the weights of links based on historical interaction records. The influence of a node, based on the amount of forwarding, is quantified, and top-ranked nodes are selected as the influential users. The path importance is evaluated by calculating the probability that a message would spread via this path. We applied our method to a real-world network and found interesting insights. Each influential user can reach another one via a short main path, and the distribution of main paths shows a significant community effect.
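
    The path-importance measure described above (the probability that a message spreads along a path) can be imitated on a toy graph by treating each directed link weight as an independent transmission probability and ranking paths by the product of the weights along them; the weights below are invented, not derived from real interaction records.

    ```python
    def simple_paths(graph, src, dst, path=None):
        """Enumerate all simple paths in a small directed graph given as an adjacency dict."""
        path = (path or []) + [src]
        if src == dst:
            yield path
            return
        for nxt, _w in graph.get(src, []):
            if nxt not in path:
                yield from simple_paths(graph, nxt, dst, path)

    def path_probability(graph, path):
        """Product of link weights, read as independent transmission probabilities."""
        prob = 1.0
        for a, b in zip(path, path[1:]):
            prob *= dict(graph[a])[b]
        return prob

    # Toy network with hypothetical link weights
    g = {"A": [("B", 0.8), ("C", 0.3)],
         "B": [("D", 0.6)],
         "C": [("D", 0.9)],
         "D": []}

    best = max(simple_paths(g, "A", "D"), key=lambda p: path_probability(g, p))
    print(best, path_probability(g, best))   # ['A', 'B', 'D'] with probability 0.48
    ```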

  15. An adiabatic linearized path integral approach for quantum time-correlation functions II: a cumulant expansion method for improving convergence.

    PubMed

    Causo, Maria Serena; Ciccotti, Giovanni; Bonella, Sara; Vuilleumier, Rodolphe

    2006-08-17

    Linearized mixed quantum-classical simulations are a promising approach for calculating time-correlation functions. At the moment, however, they suffer from some numerical problems that may compromise their efficiency and reliability in applications to realistic condensed-phase systems. In this paper, we present a method that improves upon the convergence properties of the standard algorithm for linearized calculations by implementing a cumulant expansion of the relevant averages. The effectiveness of the new approach is tested by applying it to the challenging computation of the diffusion of an excess electron in a metal-molten salt solution.

  16. A defect size estimation method based on operational speed and path of rolling elements in defective bearings

    NASA Astrophysics Data System (ADS)

    Moazen-ahmadi, Alireza; Howard, Carl Q.

    2016-12-01

    This paper describes the effect of inertia and centrifugal force, acting on a rotating rolling element in a defective bearing, on the measured vibration signature. These effects are more pronounced as the speed of the components increases. A significant speed-dependency of the characteristic events that are generated at the angular extents of the defect is shown by simulation and experimental measurements. The sources of inaccuracy and the speed-dependency in existing defect size estimation algorithms are explained. The analyses presented in this study are essential for developing accurate and reliable defect size estimation algorithms. A complete defect size estimation algorithm is proposed that is more accurate and less biased by shaft speed than existing methods.

  17. An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D

    NASA Technical Reports Server (NTRS)

    Thompson, Kyle B.; Gnoffo, Peter A.

    2016-01-01

    An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.

  18. Shortest path and Schramm-Loewner Evolution

    PubMed Central

    Posé, N.; Schrenk, K. J.; Araújo, N. A. M.; Herrmann, H. J.

    2014-01-01

    We numerically show that the statistical properties of the shortest path on critical percolation clusters are consistent with the ones predicted for Schramm-Loewner evolution (SLE) curves for κ = 1.04 ± 0.02. The shortest path results from a global optimization process. To identify it, one needs to explore an entire area. Establishing a relation with SLE makes it possible to generate curves statistically equivalent to the shortest path from a Brownian motion. We numerically analyze the winding angle, the left passage probability, and the driving function of the shortest path and compare them to the distributions predicted for SLE curves with the same fractal dimension. The consistency with SLE opens the possibility of using a solid theoretical framework to describe the shortest path, and it raises relevant questions regarding conformal invariance and domain Markov properties, which we also discuss. PMID:24975019
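
    The basic object studied above, the chemical (shortest-path) distance on a critical percolation cluster, can be generated with a few lines of code: draw a site-percolation configuration near the square-lattice threshold and run a breadth-first search between two boundaries. This sketch only shows where the shortest path comes from; it performs none of the SLE analysis (winding angle, left passage probability, driving function) reported in the paper.

    ```python
    import numpy as np
    from collections import deque

    def chemical_distance(L=128, p=0.592746, seed=0):
        """Shortest-path length between the left and right edges over occupied sites,
        or None if no spanning cluster exists in this sample."""
        rng = np.random.default_rng(seed)
        occ = rng.random((L, L)) < p                    # site percolation configuration
        dist = np.full((L, L), -1, dtype=int)
        queue = deque()
        for i in range(L):                              # seed BFS from the left edge
            if occ[i, 0]:
                dist[i, 0] = 0
                queue.append((i, 0))
        while queue:
            i, j = queue.popleft()
            if j == L - 1:                              # reached the right edge
                return dist[i, j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < L and 0 <= b < L and occ[a, b] and dist[a, b] < 0:
                    dist[a, b] = dist[i, j] + 1
                    queue.append((a, b))
        return None

    print(chemical_distance())
    ```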

  19. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power-law Z-R was found to be highly biased (+90%-110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power-law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
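
    The two estimators compared above can be sketched side by side: the fixed power law inverts Z = 300R^1.4, while the probability matching method pairs equal cumulative probabilities of the observed reflectivity and gauge rain-rate distributions to build an empirical Z-to-R lookup. The sample arrays below are synthetic placeholders, not the Florida data set.

    ```python
    import numpy as np

    def rain_from_powerlaw(dbz):
        """Invert Z = 300 * R**1.4, with Z in mm^6 m^-3 and dBZ = 10*log10(Z)."""
        z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
        return (z / 300.0) ** (1.0 / 1.4)

    def pmm_lookup(dbz_sample, rain_sample, n_quantiles=100):
        """Probability matching: pair equal-probability quantiles of Z and R."""
        q = np.linspace(0.0, 1.0, n_quantiles)
        return np.quantile(dbz_sample, q), np.quantile(rain_sample, q)

    def rain_from_pmm(dbz, lookup):
        dbz_q, rain_q = lookup
        return np.interp(dbz, dbz_q, rain_q)

    # Synthetic "observations": pretend the gauges read systematically lower than the power law
    rng = np.random.default_rng(1)
    dbz_obs = rng.uniform(15.0, 55.0, 5000)
    rain_obs = rain_from_powerlaw(dbz_obs) * 0.55
    lookup = pmm_lookup(dbz_obs, rain_obs)
    print(rain_from_powerlaw(40.0), rain_from_pmm(40.0, lookup))
    ```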

  20. A comparative study of the centroid and ring-polymer molecular dynamics methods for approximating quantum time correlation functions from path integrals

    NASA Astrophysics Data System (ADS)

    Pérez, Alejandro; Tuckerman, Mark E.; Müser, Martin H.

    2009-05-01

    The problems of ergodicity and internal consistency in the centroid and ring-polymer molecular dynamics methods are addressed in the context of a comparative study of the two methods. Enhanced sampling in ring-polymer molecular dynamics (RPMD) is achieved by first performing an equilibrium path integral calculation and then launching RPMD trajectories from selected, stochastically independent equilibrium configurations. It is shown that this approach converges more rapidly than periodic resampling of velocities from a single long RPMD run. Dynamical quantities obtained from RPMD and centroid molecular dynamics (CMD) are compared to exact results for a variety of model systems. Fully converged results for correlation functions are presented for several one-dimensional systems and para-hydrogen near its triple point using an improved sampling technique. Our results indicate that CMD shows very similar performance to RPMD. The quality of each method is further assessed via a new χ2 descriptor constructed by transforming approximate real-time correlation functions from CMD and RPMD trajectories to imaginary time and comparing these to numerically exact imaginary time correlation functions. For para-hydrogen near its triple point, it is found that adiabatic CMD and RPMD both have similar χ2 error.

  1. Probability and radical behaviorism

    PubMed Central

    Espinosa, James M.

    1992-01-01

    The concept of probability appears to be very important in the radical behaviorism of Skinner. Yet, it seems that this probability has not been accurately defined and is still ambiguous. I give a strict, relative frequency interpretation of probability and its applicability to the data from the science of behavior as supplied by cumulative records. Two examples of stochastic processes are given that may model the data from cumulative records that result under conditions of continuous reinforcement and extinction, respectively. PMID:22478114

  2. Introducing a Method for Calculating the Allocation of Attention in a Cognitive “Two-Armed Bandit” Procedure: Probability Matching Gives Way to Maximizing

    PubMed Central

    Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor

    2016-01-01

    We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the “two-armed bandit choice procedure.” The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result which has attracted most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small, adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting, a simple linear equation describes the relationship between attention and correct responses, and that the equation’s solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the

  3. PROBABILITY AND STATISTICS.

    DTIC Science & Technology

    (*STATISTICAL ANALYSIS, REPORTS), (*PROBABILITY, REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION

  4. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along with the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
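
    A rough Python imitation of the behavior described above, resolving every symlink to the final absolute path and reporting ownership and permissions of each directory component, is shown below; it is a sketch of the idea only, not the actual ap implementation.

    ```python
    import stat
    import sys
    from pathlib import Path

    def describe_absolute_path(name):
        final = Path(name).resolve()                 # traverse all symlinks
        print("final path:", final)
        part = Path(final.anchor)
        for comp in final.relative_to(final.anchor).parts:
            part = part / comp
            st = part.lstat()
            print(f"{stat.filemode(st.st_mode)}  uid={st.st_uid} gid={st.st_gid}  {part}")

    if __name__ == "__main__":
        describe_absolute_path(sys.argv[1] if len(sys.argv) > 1 else ".")
    ```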

  5. Probability state modeling theory.

    PubMed

    Bagwell, C Bruce; Hunsberger, Benjamin C; Herbert, Donald J; Munson, Mark E; Hill, Beth L; Bray, Chris M; Preffer, Frederic I

    2015-07-01

    As the technology of cytometry matures, there is mounting pressure to address two major issues with data analyses. The first issue is to develop new analysis methods for high-dimensional data that can directly reveal and quantify important characteristics associated with complex cellular biology. The other issue is to replace subjective and inaccurate gating with automated methods that objectively define subpopulations and account for population overlap due to measurement uncertainty. Probability state modeling (PSM) is a technique that addresses both of these issues. The theory and important algorithms associated with PSM are presented along with simple examples and general strategies for autonomous analyses. PSM is leveraged to better understand B-cell ontogeny in bone marrow in a companion Cytometry Part B manuscript. Three short relevant videos are available in the online supporting information for both of these papers. PSM avoids the dimensionality barrier normally associated with high-dimensionality modeling by using broadened quantile functions instead of frequency functions to represent the modulation of cellular epitopes as cells differentiate. Since modeling programs ultimately minimize or maximize one or more objective functions, they are particularly amenable to automation and, therefore, represent a viable alternative to subjective and inaccurate gating approaches.

  6. Knowledge typology for imprecise probabilities.

    SciTech Connect

    Wilson, G. D.; Zucker, L. J.

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (e.g., Dempster-Shafer theory or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  7. General polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method applied to small atoms, ions, and molecules at finite temperatures

    NASA Astrophysics Data System (ADS)

    Tiihonen, Juha; Kylänpää, Ilkka; Rantala, Tapio T.

    2016-09-01

    The nonlinear optical properties of matter have a broad relevance and many methods have been invented to compute them from first principles. However, the effects of electronic correlation, finite temperature, and breakdown of the Born-Oppenheimer approximation have turned out to be challenging and tedious to model. Here we propose a straightforward approach and derive general field-free polarizability and hyperpolarizability estimators for the path-integral Monte Carlo method. The estimators are applied to small atoms, ions, and molecules with one or two electrons. With the adiabatic, i.e., Born-Oppenheimer, approximation we obtain accurate tensorial ground state polarizabilities, while the nonadiabatic simulation adds in considerable rovibrational effects and thermal coupling. In both cases, the 0 K, or ground-state, limit is in excellent agreement with the literature. Furthermore, we report here the internal dipole moment of the PsH molecule, the temperature dependence of the polarizabilities of H-, and the average dipole polarizabilities and the ground-state hyperpolarizabilities of HeH+ and H3+.

  8. Probability and Statistics.

    ERIC Educational Resources Information Center

    Barnes, Bernis, Ed.; And Others

    This teacher's guide to probability and statistics contains three major sections. The first section on elementary combinatorial principles includes activities, student problems, and suggested teaching procedures for the multiplication principle, permutations, and combinations. Section two develops an intuitive approach to probability through…

  9. Teachers' Understandings of Probability

    ERIC Educational Resources Information Center

    Liu, Yan; Thompson, Patrick

    2007-01-01

    Probability is an important idea with a remarkably wide range of applications. However, psychological and instructional studies conducted in the last two decades have consistently documented poor understanding of probability among different populations across different settings. The purpose of this study is to develop a theoretical framework for…

  10. Human-machine teaming for effective estimation and path planning

    NASA Astrophysics Data System (ADS)

    McCourt, Michael J.; Mehta, Siddhartha S.; Doucette, Emily A.; Curtis, J. Willard

    2016-05-01

    While traditional sensors provide accurate measurements of quantifiable information, humans provide better qualitative information and holistic assessments. Sensor fusion approaches that team humans and machines can take advantage of the benefits provided by each while mitigating the shortcomings. These two sensor sources can be fused together using Bayesian fusion, which assumes that there is a method of generating a probabilistic representation of the sensor measurement. This general framework of fusing estimates can also be applied to joint human-machine decision making. In the simple case, binary decisions can be fused by using a probability of taking an action versus inaction from each decision-making source. These are fused together to arrive at a final probability of taking an action, which would be taken if above a specified threshold. In the case of path planning, rather than binary decisions being fused, complex decisions can be fused by allowing the human and machine to interact with each other. For example, the human can draw a suggested path while the machine planning algorithm can refine it to avoid obstacles and remain dynamically feasible. Similarly, the human can revise a suggested path to achieve secondary goals not encoded in the algorithm such as avoiding dangerous areas in the environment.
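
    A hedged sketch of the binary-decision fusion described above: each source supplies a probability that an action should be taken, the two are combined (here under a naive independence assumption, in log-odds form), and the action is taken only if the fused probability clears a threshold. The independence assumption, the prior, and the threshold are illustrative choices, not the authors' fusion rule.

    ```python
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def fuse_action_probability(p_human, p_machine, prior=0.5):
        """Naive-Bayes style fusion of two action probabilities in log-odds space."""
        fused = logit(prior) + (logit(p_human) - logit(prior)) + (logit(p_machine) - logit(prior))
        return 1.0 / (1.0 + math.exp(-fused))

    p = fuse_action_probability(p_human=0.7, p_machine=0.6)
    print(p, "-> act" if p > 0.8 else "-> hold")   # the 0.8 threshold is arbitrary
    ```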

  11. Tackling higher derivative ghosts with the Euclidean path integral

    SciTech Connect

    Fontanini, Michele; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  12. Simulation of tunneling in enzyme catalysis by combining a biased propagation approach and the quantum classical path method: application to lipoxygenase.

    PubMed

    Mavri, Janez; Liu, Hanbin; Olsson, Mats H M; Warshel, Arieh

    2008-05-15

    The ability of using wave function propagation approaches to simulate isotope effects in enzymes is explored, focusing on the large H/D kinetic isotope effect of soybean lipoxygenase-1 (SLO-1). The H/D kinetic isotope effect (KIE) is calculated as the ratio of the rate constants for hydrogen and deuterium transfer. The rate constants are calculated from the time course of the H and D nuclear wave functions. The propagations are done using one-dimensional proton potentials generated as sections from the full multidimensional surface of the reacting system in the protein. The sections are obtained during a classical empirical valence bond (EVB) molecular dynamics simulation of SLO-1. Since the propagations require an extremely long time for treating realistic activation barriers, it is essential to use an effective biasing approach. Thus, we develop here an approach that uses the quantum classical path (QCP) method to evaluate the quantum free energy change associated with the biasing potential. This approach provides an interesting alternative to full QCP simulations and to other current approaches for simulating isotope effects in proteins. In particular, this approach can be used to evaluate the quantum mechanical transmission factor or other dynamical effects, while still obtaining reliable quantized activation free energies due to the QCP correction.

  13. Path integral density matrix dynamics: A method for calculating time-dependent properties in thermal adiabatic and non-adiabatic systems

    NASA Astrophysics Data System (ADS)

    Habershon, Scott

    2013-09-01

    We introduce a new approach for calculating quantum time-correlation functions and time-dependent expectation values in many-body thermal systems; both electronically adiabatic and non-adiabatic cases can be treated. Our approach uses a path integral simulation to sample an initial thermal density matrix; subsequent evolution of this density matrix is equivalent to solution of the time-dependent Schrödinger equation, which we perform using a linear expansion of Gaussian wavepacket basis functions which evolve according to simple classical-like trajectories. Overall, this methodology represents a formally exact approach for calculating time-dependent quantum properties; by introducing approximations into both the imaginary-time and real-time propagations, this approach can be adapted for complex many-particle systems interacting through arbitrary potentials. We demonstrate this method for the spin Boson model, where we find good agreement with numerically exact calculations. We also discuss future directions of improvement for our approach with a view to improving accuracy and efficiency.

  14. Guide star probabilities

    NASA Technical Reports Server (NTRS)

    Soneira, R. M.; Bahcall, J. N.

    1981-01-01

    Probabilities are calculated for acquiring suitable guide stars (GS) with the fine guidance system (FGS) of the space telescope. A number of the considerations and techniques described are also relevant for other space astronomy missions. The constraints of the FGS are reviewed. The available data on bright star densities are summarized and a previous error in the literature is corrected. Separate analytic and Monte Carlo calculations of the probabilities are described. A simulation of space telescope pointing is carried out using the Weistrop north galactic pole catalog of bright stars. Sufficient information is presented so that the probabilities of acquisition can be estimated as a function of position in the sky. The probability of acquiring suitable guide stars is greatly increased if the FGS can allow an appreciable difference between the (bright) primary GS limiting magnitude and the (fainter) secondary GS limiting magnitude.
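
    The kind of analytic estimate described above can be illustrated with a simple Poisson model: if suitable guide stars have surface density rho (stars per square arcminute) and the FGS field of view covers an area A, the chance of finding at least the required number of stars is a Poisson tail probability. The density, field area, and required count below are placeholders, not the paper's values or its Monte Carlo treatment.

    ```python
    import math

    def acquisition_probability(density_per_arcmin2, fov_arcmin2, n_required=2):
        """P(at least n_required suitable stars in the field), assuming Poisson star counts."""
        mu = density_per_arcmin2 * fov_arcmin2
        p_fewer = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n_required))
        return 1.0 - p_fewer

    print(acquisition_probability(density_per_arcmin2=0.05, fov_arcmin2=69.0))
    ```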

  15. Quantum computing and probability.

    PubMed

    Ferry, David K

    2009-11-25

    Over the past two decades, quantum computing has become a popular and promising approach to trying to solve computationally difficult problems. Missing in many descriptions of quantum computing is just how probability enters into the process. Here, we discuss some simple examples of how uncertainty and probability enter, and how this and the ideas of quantum computing challenge our interpretations of quantum mechanics. It is found that this uncertainty can lead to intrinsic decoherence, and this raises challenges for error correction.

  16. Rationalizing Hybrid Earthquake Probabilities

    NASA Astrophysics Data System (ADS)

    Gomberg, J.; Reasenberg, P.; Beeler, N.; Cocco, M.; Belardinelli, M.

    2003-12-01

    An approach to including stress transfer and frictional effects in estimates of the probability of failure of a single fault affected by a nearby earthquake has been suggested in Stein et al. (1997). This `hybrid' approach combines conditional probabilities, which depend on the time elapsed since the last earthquake on the affected fault, with Poissonian probabilities that account for friction and depend only on the time since the perturbing earthquake. The latter are based on the seismicity rate change model developed by Dieterich (1994) to explain the temporal behavior of aftershock sequences in terms of rate-state frictional processes. The model assumes an infinite population of nucleation sites that are near failure at the time of the perturbing earthquake. In the hybrid approach, assuming the Dieterich model can lead to significant transient increases in failure probability. We explore some of the implications of applying the Dieterich model to a single fault and its impact on the hybrid probabilities. We present two interpretations that we believe can rationalize the use of the hybrid approach. In the first, a statistical distribution representing uncertainties in elapsed and/or mean recurrence time on the fault serves as a proxy for Dieterich's population of nucleation sites. In the second, we imagine a population of nucleation patches distributed over the fault with a distribution of maturities. In both cases we find that the probability depends on the time since the last earthquake. In particular, the size of the transient probability increase may only be significant for faults already close to failure. Neglecting the maturity of a fault may lead to overestimated rate and probability increases.

  17. Source and path corrections, feature selection, and outlier detection applied to regional event discrimination in China

    SciTech Connect

    Hartse, H.E.; Taylor, S.R.; Phillips, W.S.; Velasco, A.A.

    1999-03-01

    The authors are investigating techniques to improve regional discrimination performance in uncalibrated regions. These include combined source and path corrections, spatial path corrections, path-specific waveguide corrections to construct frequency-dependent amplitude corrections that remove attenuation, corner frequency scaling, and source region/path effects (such as blockages). The spatial method and the waveguide method address corrections for specific source regions and along specific paths. After applying the above corrections to phase amplitudes, the authors form amplitude ratios and use a combination of feature selection and outlier detection to choose the best-performing combination of discriminants. Feature selection remains an important issue. Most stations have an inadequate population of nuclear explosions on which to base discriminant selection. Additionally, mining explosions are probably not good surrogates for nuclear explosions. The authors are exploring the feasibility of sampling the source and path corrected amplitudes for each phase as a function of frequency in an outlier detection framework. In this case, the source identification capability will be based on the inability of the earthquake source model to fit data from explosion sources.

  18. Probabilities in implicit learning.

    PubMed

    Tseng, Philip; Hsu, Tzu-Yu; Tzeng, Ovid J L; Hung, Daisy L; Juan, Chi-Hung

    2011-01-01

    The visual system possesses a remarkable ability in learning regularities from the environment. In the case of contextual cuing, predictive visual contexts such as spatial configurations are implicitly learned, retained, and used to facilitate visual search, all without one's subjective awareness and conscious effort. Here we investigated whether implicit learning and its facilitatory effects are sensitive to the statistical property of such implicit knowledge. In other words, are highly probable events learned better than less probable ones even when such learning is implicit? We systematically varied the frequencies of context repetition to alter the degrees of learning. Our results showed that search efficiency increased consistently as contextual probabilities increased. Thus, the visual contexts, along with their probability of occurrences, were both picked up by the visual system. Furthermore, even when the total number of exposures was held constant between each probability, the highest probability still enjoyed a greater cuing effect, suggesting that the temporal aspect of implicit learning is also an important factor to consider in addition to the effect of mere frequency. Together, these findings suggest that implicit learning, although bypassing observers' conscious encoding and retrieval effort, behaves much like explicit learning in the sense that its facilitatory effect also varies as a function of its associative strengths.

  19. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
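
    The quantity analyzed above depends on the combined position covariance, the combined size of the two objects, and the nominal miss distance. A brute-force numerical version of that probability, integrating an assumed Gaussian relative-position density over the combined hard-body circle in the encounter plane, is sketched below with made-up numbers; it is not the report's closed-form solution.

    ```python
    import numpy as np

    def collision_probability(miss_vector, cov, hard_body_radius, n_grid=400):
        """Integrate a 2-D Gaussian over the combined hard-body disk in the encounter plane."""
        r = hard_body_radius
        xs = np.linspace(-r, r, n_grid)
        dx = xs[1] - xs[0]
        X, Y = np.meshgrid(xs, xs)
        inside = X**2 + Y**2 <= r**2
        d = np.stack([X - miss_vector[0], Y - miss_vector[1]], axis=-1)
        inv = np.linalg.inv(cov)
        quad = np.einsum("...i,ij,...j->...", d, inv, d)
        pdf = np.exp(-0.5 * quad) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        return float(np.sum(pdf[inside]) * dx * dx)

    # Hypothetical encounter: 200 m miss distance, 100 m x 60 m position sigmas, 12 m combined radius
    cov = np.array([[100.0**2, 0.0], [0.0, 60.0**2]])
    print(collision_probability(miss_vector=(200.0, 0.0), cov=cov, hard_body_radius=12.0))
    ```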

  20. Detonation probabilities of high explosives

    SciTech Connect

    Eisenhawer, S.W.; Bott, T.F.; Bement, T.R.

    1995-07-01

    The probability of a high explosive violent reaction (HEVR) following various events is an extremely important aspect of estimating accident-sequence frequency for nuclear weapons dismantlement. In this paper, we describe the development of response curves for insults to PBX 9404, a conventional high-performance explosive used in US weapons. The insults during dismantlement include drops of high explosive (HE), strikes of tools and components on HE, and abrasion of the explosive. In the case of drops, we combine available test data on HEVRs and the results of flooring certification tests to estimate the HEVR probability. For other insults, it was necessary to use expert opinion. We describe the expert solicitation process and the methods used to consolidate the responses. The HEVR probabilities obtained from both approaches are compared.

  1. Stationary and Nonstationary Response Probability Density Function of a Beam under Poisson White Noise

    NASA Astrophysics Data System (ADS)

    Vasta, M.; Di Paola, M.

    In this paper an approximate explicit probability density function for the analysis of external oscillations of a linear and geometrically nonlinear simply supported beam driven by random pulses is proposed. The adopted impulsive loading model is Poisson white noise, that is, a process having Dirac delta occurrences with random intensity distributed in time according to Poisson's law. The response probability density function can be obtained by solving the related Kolmogorov-Feller (KF) integro-differential equation. An approximate solution, using the path integral method, is derived by transforming the KF equation into a first-order partial differential equation. The method of characteristics is then applied to obtain an explicit solution. Different levels of approximation, depending on the physical assumptions on the transition probability density function, are found, and the solution for the response density is obtained as a series expansion using convolution integrals.

  2. Normal probability plots with confidence.

    PubMed

    Chantarangsi, Wanpen; Liu, Wei; Bretz, Frank; Kiatsupaibul, Seksan; Hayter, Anthony J; Wan, Fang

    2015-01-01

    Normal probability plots are widely used as a statistical tool for assessing whether an observed simple random sample is drawn from a normally distributed population. The users, however, have to judge subjectively, if no objective rule is provided, whether the plotted points fall close to a straight line. In this paper, we focus on how a normal probability plot can be augmented by intervals for all the points so that, if the population distribution is normal, then all the points should fall into the corresponding intervals simultaneously with probability 1-α. These simultaneous 1-α probability intervals therefore provide an objective means to judge whether the plotted points fall close to the straight line: the plotted points fall close to the straight line if and only if all the points fall into the corresponding intervals. The powers of several normal probability plot based (graphical) tests and the most popular nongraphical Anderson-Darling and Shapiro-Wilk tests are compared by simulation. Based on this comparison, recommendations are given in Section 3 on which graphical tests should be used in what circumstances. An example is provided to illustrate the methods.
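
    A rough reconstruction of the idea is sketched below: pointwise order-statistic intervals on a normal Q-Q plot, with the pointwise level shrunk by Monte Carlo until all points are covered simultaneously with probability 1 - alpha under the normal null. This is a simplified stand-in built on stated assumptions (standardized order statistics, Beta-distributed uniform order statistics), not the paper's exact construction.

    ```python
    import numpy as np
    from scipy.stats import beta, norm

    def pointwise_bands(n, gamma):
        """Order-statistic intervals: U_(i) ~ Beta(i, n+1-i), mapped through Phi^{-1}."""
        i = np.arange(1, n + 1)
        lo = norm.ppf(beta.ppf(gamma / 2.0, i, n + 1 - i))
        hi = norm.ppf(beta.ppf(1.0 - gamma / 2.0, i, n + 1 - i))
        return lo, hi

    def simultaneous_bands(n, alpha=0.05, n_sim=20000, seed=0):
        """Shrink the pointwise level gamma until all n standardized order statistics fall
        inside their intervals with probability >= 1 - alpha under the normal null."""
        rng = np.random.default_rng(seed)
        sims = np.sort(rng.standard_normal((n_sim, n)), axis=1)
        sims = (sims - sims.mean(axis=1, keepdims=True)) / sims.std(axis=1, ddof=1, keepdims=True)
        for gamma in np.geomspace(alpha, alpha / 1000.0, 50):
            lo, hi = pointwise_bands(n, gamma)
            coverage = np.mean(np.all((sims >= lo) & (sims <= hi), axis=1))
            if coverage >= 1.0 - alpha:
                break
        return lo, hi, gamma, coverage

    lo, hi, gamma, cov = simultaneous_bands(n=30)
    print(f"pointwise level {gamma:.4f} gives simultaneous coverage {cov:.3f}")
    ```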

  3. The perception of probability.

    PubMed

    Gallistel, C R; Krishan, Monika; Liu, Ye; Miller, Reilly; Latham, Peter E

    2014-01-01

    We present a computational model to explain the results from experiments in which subjects estimate the hidden probability parameter of a stepwise nonstationary Bernoulli process outcome by outcome. The model captures the following results qualitatively and quantitatively, with only 2 free parameters: (a) Subjects do not update their estimate after each outcome; they step from one estimate to another at irregular intervals. (b) The joint distribution of step widths and heights cannot be explained on the assumption that a threshold amount of change must be exceeded in order for them to indicate a change in their perception. (c) The mapping of observed probability to the median perceived probability is the identity function over the full range of probabilities. (d) Precision (how close estimates are to the best possible estimate) is good and constant over the full range. (e) Subjects quickly detect substantial changes in the hidden probability parameter. (f) The perceived probability sometimes changes dramatically from one observation to the next. (g) Subjects sometimes have second thoughts about a previous change perception, after observing further outcomes. (h) The frequency with which they perceive changes moves in the direction of the true frequency over sessions. (Explaining this finding requires 2 additional parametric assumptions.) The model treats the perception of the current probability as a by-product of the construction of a compact encoding of the experienced sequence in terms of its change points. It illustrates the why and the how of intermittent Bayesian belief updating and retrospective revision in simple perception. It suggests a reinterpretation of findings in the recent literature on the neurobiology of decision making.

  4. Identifying decohering paths in closed quantum systems

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1990-01-01

    A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. There are interesting differences with each of these which are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. Also briefly discussed is how one might quantify some other aspects of classicality. The important role that concrete calculations play in testing this and other proposals is stressed.

  5. Pathfinder: Visual Analysis of Paths in Graphs

    PubMed Central

    Partl, C.; Gratzl, S.; Streit, M.; Wassermann, A. M.; Pfister, H.; Schmalstieg, D.; Lex, A.

    2016-01-01

    The analysis of paths in graphs is highly relevant in many domains. Typically, path-related tasks are performed in node-link layouts. Unfortunately, graph layouts often do not scale to the size of many real world networks. Also, many networks are multivariate, i.e., contain rich attribute sets associated with the nodes and edges. These attributes are often critical in judging paths, but directly visualizing attributes in a graph layout exacerbates the scalability problem. In this paper, we present visual analysis solutions dedicated to path-related tasks in large and highly multivariate graphs. We show that by focusing on paths, we can address the scalability problem of multivariate graph visualization, equipping analysts with a powerful tool to explore large graphs. We introduce Pathfinder (Figure 1), a technique that provides visual methods to query paths, while considering various constraints. The resulting set of paths is visualized in both a ranked list and as a node-link diagram. For the paths in the list, we display rich attribute data associated with nodes and edges, and the node-link diagram provides topological context. The paths can be ranked based on topological properties, such as path length or average node degree, and scores derived from attribute data. Pathfinder is designed to scale to graphs with tens of thousands of nodes and edges by employing strategies such as incremental query results. We demonstrate Pathfinder's fitness for use in scenarios with data from a coauthor network and biological pathways. PMID:27942090

  6. Estimating Transitional Probabilities with Cross-Sectional Data to Assess Smoking Behavior Progression: A Validation Analysis

    PubMed Central

    Chen, Xinguang; Lin, Feng

    2013-01-01

    Background and objective New analytical tools are needed to advance tobacco research, tobacco control planning and tobacco use prevention practice. In this study, we validated a method to extract information from cross-sectional surveys for quantifying population dynamics of adolescent smoking behavior progression. Methods With a 3-stage 7-path model, probabilities of smoking behavior progression were estimated employing the Probabilistic Discrete Event System (PDES) method and the cross-sectional data from the 1997-2006 National Survey on Drug Use and Health (NSDUH). Validity of the PDES method was assessed using data from the National Longitudinal Survey of Youth 1997 and trends in smoking transition covering the period during which funding for tobacco control was cut substantively in 2003 in the United States. Results Probabilities for all seven smoking progression paths were successfully estimated with the PDES method and the NSDUH data. The absolute differences in the estimated probabilities between the two approaches varied from 0.002 to 0.076 (p>0.05 for all), and the estimates were highly correlated with each other (R2=0.998, p<0.01). Changes in the estimated transitional probabilities across 1997-2006 reflected the 2003 funding cut for tobacco control. Conclusions The PDES method has validity in quantifying population dynamics of smoking behavior progression with cross-sectional survey data. The estimated transitional probabilities add new evidence supporting more advanced tobacco research, tobacco control planning and tobacco use prevention practice. This method can be easily extended to study other health risk behaviors. PMID:25279247

  7. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  8. Two Paths Diverged: Exploring Trajectories, Protocols, and Dynamic Phases

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd Robert

    Using tools of statistical mechanics, it is routine to average over the distribution of microscopic configurations to obtain equilibrium free energies. These free energies teach us about the most likely molecular arrangements and the probability of observing deviations from the norm. Frequently, it is necessary to interrogate the probability not just of static arrangements, but of dynamical events, in which case analogous statistical mechanical tools may be applied to study the distribution of molecular trajectories. Numerical study of these trajectory spaces requires algorithms which efficiently sample the possible trajectories. We study in detail one such Monte Carlo algorithm, transition path sampling, and use a non-equilibrium statistical mechanical perspective to illuminate why the algorithm cannot easily be adapted to study some problems involving long-timescale dynamics. Algorithmically generating highly-correlated trajectories, a necessity for transition path sampling, grows exponentially more challenging for longer trajectories unless the dynamics is strongly-guided by the "noise history", the sequence of random numbers representing the noise terms in the stochastic dynamics. Langevin dynamics of Weeks-Chandler-Andersen (WCA) particles in two dimensions lacks this strong noise guidance, so it is challenging to use transition path sampling to study rare dynamical events in long trajectories of WCA particles. The spin flip dynamics of a two-dimensional Ising model, on the other hand, can be guided by the noise history to achieve efficient path sampling. For systems that can be efficiently sampled with path sampling, we show that it is possible to simultaneously sample both the paths and the (potentially vast) space of non-equilibrium protocols to efficiently learn how rate constants vary with protocols and to identify low-dissipation protocols. When high-dimensional molecular dynamics can be coarse-grained and represented by a simplified dynamics on a low

  9. Multiple paths in complex tasks

    NASA Technical Reports Server (NTRS)

    Galanter, Eugene; Wiegand, Thomas; Mark, Gloria

    1987-01-01

    The relationship between utility judgments of subtask paths and the utility of the task as a whole was examined. The convergent validation procedure is based on the assumption that measurements of the same quantity done with different methods should covary. The utility measures of the subtasks were obtained during the performance of an aircraft flight controller navigation task. Analyses helped decide among various models of subtask utility combination, whether the utility ratings of subtask paths predict the whole task's utility rating, and indirectly, whether judgmental models need to include the equivalent of cognitive noise.

  10. Career Path of School Superintendents.

    ERIC Educational Resources Information Center

    Mertz, Norma T.; McNeely, Sonja R.

    This study of the career paths of 147 Tennessee school superintendents sought to determine to what extent coaching and principalships are routes to that office. The majority of respondents were white males; only one was black, and 10 were female. The data were analyzed by group, race, sex, years in office, and method of selection (elected or…

  11. New emission factors for Australian vegetation fires measured using open-path Fourier transform infrared spectroscopy - Part 1: Methods and Australian temperate forest fires

    NASA Astrophysics Data System (ADS)

    Paton-Walsh, C.; Smith, T. E. L.; Young, E. L.; Griffith, D. W. T.; Guérette, É.-A.

    2014-10-01

    Biomass burning releases trace gases and aerosol particles that significantly affect the composition and chemistry of the atmosphere. Australia contributes approximately 8% of gross global carbon emissions from biomass burning, yet there are few previous measurements of emissions from Australian forest fires available in the literature. This paper describes the results of field measurements of trace gases emitted during hazard reduction burns in Australian temperate forests using open-path Fourier transform infrared spectroscopy. In a companion paper, similar techniques are used to characterise the emissions from hazard reduction burns in the savanna regions of the Northern Territory. Details of the experimental methods are explained, including both the measurement set-up and the analysis techniques employed. The advantages and disadvantages of different ways to estimate whole-fire emission factors are discussed and a measurement uncertainty budget is developed. Emission factors for Australian temperate forest fires are measured locally for the first time for many trace gases. Where ecosystem-relevant data are required, we recommend the following emission factors for Australian temperate forest fires (in grams of gas emitted per kilogram of dry fuel burned) which are our mean measured values: 1620 ± 160 g kg-1 of carbon dioxide; 120 ± 20 g kg-1 of carbon monoxide; 3.6 ± 1.1 g kg-1 of methane; 1.3 ± 0.3 g kg-1 of ethylene; 1.7 ± 0.4 g kg-1 of formaldehyde; 2.4 ± 1.2 g kg-1 of methanol; 3.8 ± 1.3 g kg-1 of acetic acid; 0.4 ± 0.2 g kg-1 of formic acid; 1.6 ± 0.6 g kg-1 of ammonia; 0.15 ± 0.09 g kg-1 of nitrous oxide and 0.5 ± 0.2 g kg-1 of ethane.

  12. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    Path Integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition by the transition probability function, known as the propagator. In its early development, the method was applied mainly to problems in Quantum Mechanics. Nevertheless, the Path Integral can also be applied to other subjects with suitable modifications of the propagator function. In this study, we investigate the application of the Path Integral method to financial derivatives, specifically stock options. The Black-Scholes Model (Nobel 1997) was a foundational result in the study of option pricing. Although the model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes Model remains a legitimate equation for pricing an option. The derivation of Black-Scholes is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the Path Integral: in Black-Scholes the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the path integral analytical solution and a Monte-Carlo numerical solution to assess the similarity between these two methods.
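
    As an illustrative aside (not code from the cited work), the sketch below compares the closed-form Black-Scholes price of a European call, which is the result the path-integral derivation recovers, with a Monte Carlo estimate under geometric Brownian motion. All parameter values are hypothetical.

      # Hedged sketch: closed-form Black-Scholes price vs. a Monte Carlo
      # estimate under geometric Brownian motion. Parameters are illustrative.
      import numpy as np
      from math import log, sqrt, exp, erf

      def norm_cdf(x):
          return 0.5 * (1.0 + erf(x / sqrt(2.0)))

      def bs_call(S0, K, r, sigma, T):
          d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
          d2 = d1 - sigma * sqrt(T)
          return S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

      def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=0):
          rng = np.random.default_rng(seed)
          z = rng.standard_normal(n_paths)
          ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
          return exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

      if __name__ == "__main__":
          S0, K, r, sigma, T = 100.0, 105.0, 0.03, 0.2, 1.0   # hypothetical values
          print("analytical :", round(bs_call(S0, K, r, sigma, T), 4))
          print("Monte Carlo:", round(mc_call(S0, K, r, sigma, T), 4))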

  13. Estimating tail probabilities

    SciTech Connect

    Carr, D.B.; Tolley, H.D.

    1982-12-01

    This paper investigates procedures for univariate nonparametric estimation of tail probabilities. Extrapolated values for tail probabilities beyond the data are also obtained based on the shape of the density in the tail. Several estimators which use exponential weighting are described. These are compared in a Monte Carlo study to nonweighted estimators, to the empirical cdf, to an integrated kernel, to a Fourier series estimate, to a penalized likelihood estimate and a maximum likelihood estimate. Selected weighted estimators are shown to compare favorably to many of these standard estimators for the sampling distributions investigated.
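
    As a hedged illustration of the general idea (not the paper's exponentially weighted estimators), the sketch below compares the empirical tail probability with an extrapolation that fits an exponential to exceedances over a high threshold, so tail probabilities beyond the data can still be assigned a value. Data and threshold choice are made up.

      # Minimal sketch: empirical tail estimate vs. exponential-tail extrapolation
      # fitted to exceedances over a threshold (illustrative only).
      import numpy as np

      def empirical_tail(sample, x):
          return np.mean(sample > x)

      def exponential_tail(sample, x, quantile=0.9):
          u = np.quantile(sample, quantile)        # threshold
          excess = sample[sample > u] - u
          beta = excess.mean()                     # MLE of exponential scale
          p_exceed_u = np.mean(sample > u)
          return p_exceed_u * np.exp(-(x - u) / beta) if x > u else empirical_tail(sample, x)

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          data = rng.exponential(scale=1.0, size=2000)    # toy sample
          for x in (3.0, 6.0, 9.0):
              print(x, empirical_tail(data, x), round(exponential_tail(data, x), 6))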

  14. Pulled Motzkin paths

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed paths models to their natural trinomial counterparts.
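
    A quick numerical check of the quoted limits (added here for illustration, not from the paper): the endpoint response tends to f and the midpoint response tends to 2f as the force grows.

      # Verify R_end(f) = f(1 + 2cosh f)/(2 sinh f) -> f and
      #        R_mid(f) = f(1 + 2cosh(f/2))/sinh(f/2) -> 2f as f grows.
      import numpy as np

      f = np.array([1.0, 5.0, 10.0, 20.0])
      R_end = f * (1 + 2 * np.cosh(f)) / (2 * np.sinh(f))
      R_mid = f * (1 + 2 * np.cosh(f / 2)) / np.sinh(f / 2)
      print("R_end / f    :", R_end / f)        # approaches 1
      print("R_mid / (2 f):", R_mid / (2 * f))  # approaches 1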

  15. Path Integrals and Hamiltonians

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.

    2014-03-01

    1. Synopsis; Part I. Fundamental Principles: 2. The mathematical structure of quantum mechanics; 3. Operators; 4. The Feynman path integral; 5. Hamiltonian mechanics; 6. Path integral quantization; Part II. Stochastic Processes: 7. Stochastic systems; Part III. Discrete Degrees of Freedom: 8. Ising model; 9. Ising model: magnetic field; 10. Fermions; Part IV. Quadratic Path Integrals: 11. Simple harmonic oscillators; 12. Gaussian path integrals; Part V. Action with Acceleration: 13. Acceleration Lagrangian; 14. Pseudo-Hermitian Euclidean Hamiltonian; 15. Non-Hermitian Hamiltonian: Jordan blocks; 16. The quartic potential: instantons; 17. Compact degrees of freedom; Index.

  16. The Path of Carbon in Photosynthesis VI.

    DOE R&D Accomplishments Database

    Calvin, M.

    1949-06-30

    This paper is a compilation of the essential results of our experimental work in the determination of the path of carbon in photosynthesis. There are discussions of the dark fixation of photosynthesis and methods of separation and identification including paper chromatography and radioautography. The definition of the path of carbon in photosynthesis by the distribution of radioactivity within the compounds is described.

  17. A Unifying Probability Example.

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.

    2002-01-01

    Presents an example from probability and statistics that ties together several topics including the mean and variance of a discrete random variable, the binomial distribution and its particular mean and variance, the sum of independent random variables, the mean and variance of the sum, and the central limit theorem. Uses Excel to illustrate these…

  18. Varga: On Probability.

    ERIC Educational Resources Information Center

    Varga, Tamas

    This booklet resulted from a 1980 visit by the author, a Hungarian mathematics educator, to the Teachers' Center Project at Southern Illinois University at Edwardsville. Included are activities and problems that make probability concepts accessible to young children. The topics considered are: two probability games; choosing two beads; matching…

  19. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  20. Deciphering P-T paths in metamorphic rocks involving zoned minerals using quantified maps (XMapTools software) and thermodynamics methods: Examples from the Alps and the Himalaya.

    NASA Astrophysics Data System (ADS)

    Lanari, P.; Vidal, O.; Schwartz, S.; Riel, N.; Guillot, S.; Lewin, E.

    2012-04-01

    Metamorphic rocks are made of a mosaic of local thermodynamic equilibria involving minerals that grew at different temporal, pressure (P) and temperature (T) conditions. These local (in space but also in time) equilibria can be identified using micro-structural and textural criteria, but also tested using multi-equilibrium techniques. However, linking deformation with metamorphic conditions requires spatially continuous estimates of P and T conditions in at least two dimensions (P-T maps), which can be superimposed on the observed deformation structures. To this end, we have developed a new Matlab-based GUI software for microprobe X-ray map processing (XMapTools, http://www.xmaptools.com) based on the quantification method of De Andrade et al. (2006). XMapTools software includes functions for quantification processing, two chemical modules (Chem2D, Triplot3D), the structural formula functions for common minerals, and more than 50 empirical and semi-empirical geothermobarometers obtained from the literature. XMapTools software can be easily coupled with multi-equilibrium thermobarometric calculations. We will present examples of application for two natural cases involving zoned minerals. The first example is a low-grade metapelite from the paleo-subduction wedge in the Western Alps (Schistes Lustrés unit) that contains both zoned chlorite and phengite, together with quartz. The second sample is a Himalayan eclogite from the high-pressure unit of Stak (Pakistan) with an eclogitic garnet-omphacite assemblage retrogressed into clinopyroxene-plagioclase-amphibole symplectite, and later into amphibole-biotite during the collisional event under crustal conditions. In both samples, P-T paths were recovered using multi-equilibrium, or semi-empirical geothermobarometers included in the XMapTools package. The results will be compared and discussed with pseudosections calculated with the sample bulk composition and with different local bulk rock compositions estimated with XMap

  1. Optimizing Probability of Detection Point Estimate Demonstration

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2017-01-01

    Probability of detection (POD) analysis is used in assessing reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. Real flaws such as cracks and crack-like flaws must be detected using these NDE methods. A reliably detectable crack size is required for safe life analysis of fracture critical parts. The paper discusses optimizing probability of detection (POD) demonstration experiments using the point estimate method. The POD point estimate method is used by NASA for qualifying special NDE procedures. The point estimate method uses the binomial distribution for the probability density. Normally, a set of 29 flaws of the same size, within some tolerance, is used in the demonstration. The optimization is performed to provide an acceptable value for the probability of passing the demonstration (PPD) and an acceptable value for the probability of false calls (POF), while keeping the flaw sizes in the set as small as possible.
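
    As an illustrative aside (not the paper's analysis), the binomial logic behind a zero-miss point-estimate demonstration can be sketched as below: with n flaws and a pass criterion of at least a required number of detections, the probability of passing the demonstration (PPD) for a given true POD is a binomial tail probability; the classic 29-of-29 case gives PPD below 5% when the true POD is 0.90. Numbers are illustrative only.

      # Hedged sketch of the binomial bookkeeping behind a POD point-estimate
      # demonstration (illustrative, not the paper's optimization).
      from scipy.stats import binom

      def ppd(true_pod, n=29, required=29):
          """Probability of detecting at least `required` of n flaws."""
          return binom.sf(required - 1, n, true_pod)

      if __name__ == "__main__":
          for p in (0.90, 0.95, 0.98, 0.995):
              print(f"true POD = {p:0.3f}  ->  PPD = {ppd(p):0.3f}")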

  2. Understanding Y haplotype matching probability.

    PubMed

    Brenner, Charles H

    2014-01-01

    The Y haplotype population-genetic terrain is better explored from a fresh perspective rather than by analogy with the more familiar autosomal ideas. For haplotype matching probabilities, compared with autosomal matching probabilities, explicit attention to modelling - such as how evolution got us where we are - is much more important, while consideration of population frequency is much less so. This paper explores, extends, and explains some of the concepts of "Fundamental problem of forensic mathematics - the evidential strength of a rare haplotype match". That earlier paper presented and validated a "kappa method" formula for the evidential strength when a suspect matches a previously unseen haplotype (such as a Y-haplotype) at the crime scene. Mathematical implications of the kappa method are intuitive and reasonable. Suspicions to the contrary that have been raised rest on elementary errors. Critical to deriving the kappa method or any sensible evidential calculation is understanding that thinking about haplotype population frequency is a red herring; the pivotal question is one of matching probability. But confusion between the two is unfortunately institutionalized in much of the forensic world. Examples make clear why (matching) probability is not (population) frequency and why uncertainty intervals on matching probabilities are merely confused thinking. Forensic matching calculations should be based on a model, on stipulated premises. The model inevitably only approximates reality, and any error in the results comes only from error in the model, the inexactness of the approximation. Sampling variation does not measure that inexactness and hence is not helpful in explaining evidence and is in fact an impediment. Alternative haplotype matching probability approaches that various authors have considered are reviewed. Some are based on no model and cannot be taken seriously. For the others, some evaluation of the models is discussed. Recent evidence supports the adequacy of

  3. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
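
    As an illustrative aside (not the authors' code), a "probability machine" in this spirit can be sketched with scikit-learn's random forest: conditional probabilities are read from predict_proba, and a counterfactual effect size for a binary predictor is the average change in predicted probability when that predictor is toggled. The synthetic data and parameter values below are invented.

      # Hedged sketch of a random forest probability machine with a
      # counterfactual risk-difference effect size (illustrative only).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      n = 5000
      x1 = rng.integers(0, 2, n)              # binary exposure of interest
      x2 = rng.normal(size=n)                 # continuous covariate
      logit = -1.0 + 1.2 * x1 + 0.8 * x2      # true logistic data-generating model
      y = rng.random(n) < 1 / (1 + np.exp(-logit))

      X = np.column_stack([x1, x2])
      rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=25, random_state=0)
      rf.fit(X, y)

      # Counterfactual risk difference for x1: set x1 to 1 and to 0 for everyone.
      X1, X0 = X.copy(), X.copy()
      X1[:, 0], X0[:, 0] = 1, 0
      risk_diff = rf.predict_proba(X1)[:, 1].mean() - rf.predict_proba(X0)[:, 1].mean()
      print("estimated average risk difference for x1:", round(risk_diff, 3))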

  4. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

    Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. For the system design of the Hokkaido integrated telecommunications (HIT) network, outages of satellite links due to rain attenuation in Ka frequency bands must first be overcome. In this paper a theoretical analysis of rain attenuation probability on a slant path is presented. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain rate and rain height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. The novel slant-path rain attenuation prediction model, compared to the ITU-R one, exhibits a similar behaviour at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The set of presented models has the advantage of low implementation complexity and is considered useful for educational and back-of-the-envelope computations.
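
    As a generic illustration (not the paper's fitted model), if slant-path rain attenuation A is assumed to follow a Weibull exceedance law P(A > a) = exp(-(a/b)^k), then the attenuation exceeded for a given time percentage follows by inversion. The scale b and shape k below are hypothetical placeholders, not values from the paper.

      # Generic Weibull exceedance sketch for slant-path rain attenuation
      # (parameters are hypothetical placeholders).
      import numpy as np

      def attenuation_exceeded(p_percent, b=2.5, k=0.7):
          """Attenuation (dB) exceeded for p_percent of the time."""
          p = p_percent / 100.0
          return b * (-np.log(p)) ** (1.0 / k)

      if __name__ == "__main__":
          for p in (1.0, 0.1, 0.02, 0.01):
              print(f"{p:5.2f}% of time: {attenuation_exceeded(p):6.1f} dB")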

  5. Squeezed states and path integrals

    NASA Technical Reports Server (NTRS)

    Daubechies, Ingrid; Klauder, John R.

    1992-01-01

    The continuous-time regularization scheme for defining phase-space path integrals is briefly reviewed as a method to define a quantization procedure that is completely covariant under all smooth canonical coordinate transformations. As an illustration of this method, a limited set of transformations is discussed that have an image in the set of the usual squeezed states. It is noteworthy that even this limited set of transformations offers new possibilities for stationary phase approximations to quantum mechanical propagators.

  6. Topological Path Planning in GPS Trajectory Data

    PubMed Central

    Corcoran, Padraig

    2016-01-01

    This paper proposes a novel solution to the problem of computing a set of topologically inequivalent paths between two points in a space given a set of samples drawn from that space. Specifically, these paths are homotopy inequivalent where homotopy is a topological equivalence relation. This is achieved by computing a basis for the group of homology inequivalent loops in the space. An additional distinct element is then computed where this element corresponds to a loop which passes through the points in question. The set of paths is subsequently obtained by taking the orbit of this element acted on by the group of homology inequivalent loops. Using a number of spaces, including a street network where the samples are GPS trajectories, the proposed method is demonstrated to accurately compute a set of homotopy inequivalent paths. The applications of this method include path and coverage planning. PMID:28009817

  7. A weighted bootstrap method for the determination of probability density functions of freshwater distribution coefficients (Kds) of Co, Cs, Sr and I radioisotopes.

    PubMed

    Durrieu, G; Ciffroy, P; Garnier, J-M

    2006-11-01

    The objective of the study was to provide global probability density functions (PDFs) representing the uncertainty of distribution coefficients (Kds) in freshwater for radioisotopes of Co, Cs, Sr and I. A comprehensive database containing Kd values referenced in 61 articles was first built and quality scores were assigned to each data point according to various criteria (e.g. presentation of data, contact times, pH, solid-to-liquid ratio, expert judgement). A weighted bootstrapping procedure was then set up in order to build PDFs, in such a way that more importance is given to the most relevant data points (i.e. those corresponding to typical natural environments). However, it was also assessed that the relevance and the robustness of the PDFs determined by our procedure depended on the number of Kd values in the database. Owing to the large database, conditional PDFs were also proposed, for site studies where some parametric information is known (e.g. pH, contact time between radionuclides and particles, solid-to-liquid ratio). Such conditional PDFs reduce the uncertainty on the Kd values. These global and conditional PDFs are useful for end-users of dose models because the uncertainty and sensitivity of Kd values are taken into account.
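
    As a minimal illustration of a weighted bootstrap (not the authors' exact procedure), each literature Kd value can be resampled with probability proportional to its quality score, and the resampled values summarize the PDF of Kd. The values and scores below are invented.

      # Minimal weighted-bootstrap sketch for a Kd probability density function
      # (illustrative data; not the paper's database or scoring).
      import numpy as np

      kd = np.array([120., 800., 1500., 95., 400., 2500., 60., 1100.])   # L/kg, hypothetical
      score = np.array([3., 5., 4., 1., 4., 2., 1., 5.])                 # quality scores

      weights = score / score.sum()
      rng = np.random.default_rng(42)
      boot = rng.choice(kd, size=100_000, replace=True, p=weights)

      log_kd = np.log10(boot)
      print("log10(Kd): mean = %.2f, 5th-95th pct = %.2f - %.2f"
            % (log_kd.mean(), *np.percentile(log_kd, [5, 95])))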

  8. Innovative development path of ethnomedicines: the interpretation of the path.

    PubMed

    Zhu, Zhaoyun; Fu, Dehuan; Gui, Yali; Cui, Tao; Wang, Jingkun; Wang, Ting; Yang, Zhizhong; Niu, Yanfei; She, Zhennan; Wang, Li

    2017-03-01

    One of the primary purposes of the innovative development of ethnomedicines is to use their excellent safety and significant efficacy to serve a broader population. To achieve this purpose, modern scientific and technological means should be referenced, and relevant national laws and regulations as well as technical guides should be strictly followed to develop standards and to perform systemic research in producing ethnomedicines. Finally, ethnomedicines, which are applied to a limited extent in ethnic areas, can be transformed into safe, effective, and quality-controllable medical products to relieve the pain of more patients. The innovative development path of ethnomedicines includes the following three primary stages: resource study, standardized development research, and industrialization of the achievements and efforts for internationalization. The implementation of this path is always guaranteed by the research and development platform and the talent team. This article is based on the accumulation of long-term practice and is combined with the relevant disciplines, laws and regulations, and technical guidance from the research and development of ethnomedicines. The intention is to perform an in-depth analysis and explanation of the major research thinking, methods, contents, and technical paths involved in all stages of the innovative development path of ethnomedicines to provide useful references for the development of proper ethnomedicine use.

  9. Searching with Probabilities

    DTIC Science & Technology

    1983-07-26

    DeGroot, Morris H. Probability and Statistics. Addison-Wesley Publishing Company, Reading, Massachusetts, 1975. [Gillogly 78] Gillogly, J.J. Performance...distribution [DeGroot 75] has just begun. The beta distribution has several features that might make it a more reasonable choice. As with the normal-based...1982. [Cooley 65] Cooley, J.M. and Tukey, J.W. An algorithm for the machine calculation of complex Fourier series. Math. Comp. 19, 1965. [DeGroot 75

  10. The Objective Borderline Method (OBM): A Probability-Based Model for Setting up an Objective Pass/Fail Cut-Off Score in Medical Programme Assessments

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-01-01

    The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…

  11. Class probability estimation for medical studies.

    PubMed

    Simon, Richard

    2014-07-01

    I provide a commentary on two papers "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler. Those papers provide an up-to-date review of some popular machine learning methods for class probability estimation and compare those methods to logistic regression modeling in real and simulated datasets.

  12. Effects of combined dimension reduction and tabulation on the simulations of a turbulent premixed flame using a large-eddy simulation/probability density function method

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Pope, Stephen B.

    2014-05-01

    A turbulent lean-premixed propane-air flame stabilised by a triangular cylinder as a flame-holder is simulated to assess the accuracy and computational efficiency of combined dimension reduction and tabulation of chemistry. The computational condition matches the Volvo rig experiments. For the reactive simulation, the Lagrangian Large-Eddy Simulation/Probability Density Function (LES/PDF) formulation is used. A novel two-way coupling approach between LES and PDF is applied to obtain resolved density to reduce its statistical fluctuations. Composition mixing is evaluated by the modified Interaction-by-Exchange with the Mean (IEM) model. A baseline case uses In Situ Adaptive Tabulation (ISAT) to calculate chemical reactions efficiently. Its results demonstrate good agreement with the experimental measurements in turbulence statistics, temperature, and minor species mass fractions. For dimension reduction, 11 and 16 represented species are chosen and a variant of Rate Controlled Constrained Equilibrium (RCCE) is applied in conjunction with ISAT to each case. All the quantities in the comparison are indistinguishable from the baseline results using ISAT only. The combined use of RCCE/ISAT reduces the computational time for chemical reaction by more than 50%. However, for the current turbulent premixed flame, chemical reaction takes only a minor portion of the overall computational cost, in contrast to non-premixed flame simulations using LES/PDF, presumably due to the restricted manifold of purely premixed flame in the composition space. Instead, composition mixing is the major contributor to cost reduction since the mean-drift term, which is computationally expensive, is computed for the reduced representation. Overall, a reduction of more than 15% in the computational cost is obtained.

  13. The Union of Shortest Path Trees of Functional Brain Networks.

    PubMed

    Meier, Jil; Tewarie, Prejaas; Van Mieghem, Piet

    2015-11-01

    Communication between brain regions is still insufficiently understood. Applying concepts from network science has proved successful in gaining insight into the functioning of the brain. Recent work has indicated that shortest paths in the structural brain network in particular seem to play a major role in communication within the brain. So far, for the functional brain network, only the average length of the shortest paths has been analyzed. In this article, we propose to construct the union of shortest path trees (USPT) as a new topology for the functional brain network. The minimum spanning tree, which in many recent studies has been shown to comprise important features of the functional brain network, is always included in the USPT. After interpreting the link weights of the functional brain network as communication probabilities, the USPT of this network can be uniquely defined. Using data from magnetoencephalography, we applied the USPT as a method to find differences in the network topology of multiple sclerosis patients and healthy controls. The new concept of the USPT of the functional brain network also allows interesting interpretations and may represent the highways of the brain.
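
    As an illustrative sketch of the USPT idea using NetworkX (not the authors' code), link weights interpreted as communication probabilities can be mapped to lengths via -log(p), so the "shortest" path is the most probable one, and the USPT is the union of the edges of the shortest path tree rooted at every node. The example graph is invented.

      # Hedged USPT sketch: probabilities -> lengths, Dijkstra trees from every
      # node, union of their edges (toy example, not the paper's pipeline).
      import math
      import networkx as nx

      def uspt(g_prob):
          g = nx.Graph()
          for u, v, d in g_prob.edges(data=True):
              g.add_edge(u, v, length=-math.log(d["p"]))   # probability -> length
          union = nx.Graph()
          union.add_nodes_from(g)
          for root in g:
              paths = nx.single_source_dijkstra_path(g, root, weight="length")
              for path in paths.values():
                  union.add_edges_from(zip(path[:-1], path[1:]))
          return union

      if __name__ == "__main__":
          g = nx.Graph()
          g.add_edge("A", "B", p=0.9); g.add_edge("B", "C", p=0.8)
          g.add_edge("A", "C", p=0.5); g.add_edge("C", "D", p=0.7)
          print(sorted(uspt(g).edges()))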

  14. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  15. Radiometric measurements of gap probability in conifer tree canopies

    NASA Technical Reports Server (NTRS)

    Albers, Bryan J.; Strahler, Alan H.; Li, Xiaowen; Liang, Shunlin; Clarke, Keith C.

    1990-01-01

    Measurements of gap probability were made for some moderate-sized, open-grown conifers of varying species. Results of the radiometric analysis show that the gap probability, which is taken as the mean of the binomial, is well fitted by a negative exponential function of path length. The conifer shadow, then, is an object of almost uniform darkness with some bright holes or gaps that are found near the shadow's edge and rapidly disappear toward the shadow's center.
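
    As a tiny illustration of the stated relationship (not the paper's data): if the gap probability follows P_gap = exp(-k L) for within-crown path length L, then k can be recovered by regressing -ln(P_gap) on L. The measurements below are invented.

      # Fit the negative-exponential gap model P_gap = exp(-k * L) by linear
      # regression on -ln(P_gap) (invented example data).
      import numpy as np

      L = np.array([0.5, 1.0, 1.5, 2.0, 3.0])          # path length (m), hypothetical
      p_gap = np.array([0.62, 0.40, 0.24, 0.16, 0.06]) # measured gap probability

      k, intercept = np.polyfit(L, -np.log(p_gap), 1)
      print("fitted extinction coefficient k = %.2f per m (intercept %.2f)" % (k, intercept))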

  16. Retrieve Tether Survival Probability

    DTIC Science & Technology

    2007-11-02

    cuts of the tether by meteorites and orbital debris, is calculated to be 99.934% for the planned experiment duration of six months or less. This is...due to the unlikely event of a strike by a large piece of orbital debris greater than 1 meter in size cutting all the lines of the tether at once. The...probability of the tether surviving multiple cuts by meteoroid and orbital debris impactors smaller than 5 cm in diameter is 99.9993% at six months

  17. Probability Issues in without Replacement Sampling

    ERIC Educational Resources Information Center

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…

  18. Experience matters: information acquisition optimizes probability gain.

    PubMed

    Nelson, Jonathan D; McKenzie, Craig R M; Cottrell, Garrison W; Sejnowski, Terrence J

    2010-07-01

    Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information-information gain, Kullback-Leibler distance, probability gain (error minimization), and impact-are equally consistent with extant data on human information acquisition. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects' information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects' preference for probability gain is robust, suggesting that the other models contribute little to subjects' search behavior.
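
    As an illustrative aside (not from the paper), the probability gain (error minimization) of a query can be computed as the expected maximum posterior probability over the query's outcomes minus the maximum prior probability. The priors and likelihoods below are invented.

      # Minimal probability-gain sketch for a binary query (invented numbers).
      import numpy as np

      def probability_gain(prior, likelihood):
          """prior: P(category c); likelihood[o, c]: P(outcome o | category c)."""
          prior = np.asarray(prior, float)
          joint = np.asarray(likelihood, float) * prior          # P(o, c)
          p_outcome = joint.sum(axis=1)                          # P(o)
          posterior = joint / p_outcome[:, None]                 # P(c | o)
          expected_max_post = np.sum(p_outcome * posterior.max(axis=1))
          return expected_max_post - prior.max()

      if __name__ == "__main__":
          prior = [0.7, 0.3]
          likelihood = [[0.9, 0.2],   # P(outcome 0 | category)
                        [0.1, 0.8]]   # P(outcome 1 | category)
          print("probability gain:", round(probability_gain(prior, likelihood), 3))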

  19. Path similarity skeleton graph matching.

    PubMed

    Bai, Xiang; Latecki, Longin Jan

    2008-07-01

    This paper presents a novel framework for shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the shortest paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we completely ignore the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of shortest paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with Discrete Curve Evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and occlusion.

  20. Quantifying model uncertainty in dynamical systems driven by non-Gaussian Lévy stable noise with observations on mean exit time or escape probability

    NASA Astrophysics Data System (ADS)

    Gao, Ting; Duan, Jinqiao

    2016-10-01

    Complex systems are sometimes subject to non-Gaussian α-stable Lévy fluctuations. A new method is devised to estimate the uncertain parameter α and other system parameters, using observations on mean exit time or escape probability for the system evolution. It is based on solving an inverse problem for a deterministic, nonlocal partial differential equation via numerical optimization. The existing methods for estimating parameters require observations on system state sample paths for long time periods or probability densities at large spatial ranges. The method proposed here, instead, requires observations on mean exit time or escape probability only for an arbitrarily small spatial domain. This new method is beneficial to systems for which mean exit time or escape probability is feasible to observe.

  1. Path Tracking Using Simple Planar curves

    DTIC Science & Technology

    1992-03-01

    Path Planning, Obstacle Avoidance, Autonomous Vehicle Motion. ...algorithm, the method shall be incorporated into a robot's software system. This path tracking method will lay the groundwork for a dynamic obstacle avoidance system for a mobile robot.

  2. A Path to Discovery

    ERIC Educational Resources Information Center

    Stegemoller, William; Stegemoller, Rebecca

    2004-01-01

    The path taken and the turns made as a turtle traces a polygon are examined to discover an important theorem in geometry. A unique tool, the Angle Adder, is implemented in the investigation. (Contains 9 figures.)

  3. Tortuous path chemical preconcentrator

    DOEpatents

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

    A non-planar, tortuous path chemical preconcentrator has a high internal surface area having a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream that can be rapidly released as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  4. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066
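
    As a toy illustration of the conditional-clade idea (not the article's software), each sampled rooted tree can be decomposed into (parent clade → child split) decisions, conditional split frequencies tabulated from the sample, and a tree's posterior probability estimated as the product of those conditional frequencies. The trees below are tiny invented examples given as nested tuples of taxon labels.

      # Hedged sketch: estimate a rooted tree's probability from conditional
      # clade frequencies in a posterior sample (toy data, illustrative only).
      from collections import Counter

      def splits(tree, out):
          """Recursively collect (parent clade, child split) pairs from a nested tuple."""
          if isinstance(tree, str):
              return frozenset([tree])
          left, right = (splits(sub, out) for sub in tree)
          parent = left | right
          out.append((parent, frozenset([left, right])))
          return parent

      def ccd_estimate(sample, tree):
          parent_counts, split_counts = Counter(), Counter()
          for t in sample:
              pairs = []
              splits(t, pairs)
              for parent, split in pairs:
                  parent_counts[parent] += 1
                  split_counts[(parent, split)] += 1
          prob, pairs = 1.0, []
          splits(tree, pairs)
          for parent, split in pairs:
              if parent_counts[parent] == 0:
                  return 0.0
              prob *= split_counts[(parent, split)] / parent_counts[parent]
          return prob

      if __name__ == "__main__":
          sample = [(("A", "B"), ("C", "D"))] * 6 + [(("A", "C"), ("B", "D"))] * 3 \
                   + [(("A", "D"), ("B", "C"))] * 1
          print(ccd_estimate(sample, (("A", "B"), ("C", "D"))))   # 0.6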

  5. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.

  6. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.

  7. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.; ,

    1993-01-01

    Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  8. Photodissociation of Cl 2 in helium clusters: an application of hybrid method of quantum wavepacket dynamics and path integral centroid molecular dynamics

    NASA Astrophysics Data System (ADS)

    Takayanagi, Toshiyuki; Shiga, Motoyuki

    2003-04-01

    The photodissociation dynamics of Cl 2 embedded in helium clusters is studied by numerical simulation with an emphasis on the effect of quantum character of helium motions. The simulation is based on the hybrid model in which Cl-Cl internuclear dynamics is treated in a wavepacket technique, while the helium motions are described by a path integral centroid molecular dynamics approach. It is found that the cage effect largely decreases when the helium motion is treated quantum mechanically. The mechanism is affected not only by the zero-point vibration in the helium solvation structure, but also by the quantum dynamics of helium.

  9. Converging Towards the Optimal Path to Extinction

    DTIC Science & Technology

    2011-01-01

    to extinction is the path that minimizes the action in either the Hamiltonian or Lagrangian representation. We compute the trajectory satisfying the...probability distribution, which falls steeply away from the steady state. This approximation leads to a conserved quantity that is called the Hamiltonian [35]. From the Hamiltonian, one can find a set of conservative ordinary differential equations (ODEs) that are known as Hamilton’s equations. These

  10. People's conditional probability judgments follow probability theory (plus noise).

    PubMed

    Costello, Fintan; Watts, Paul

    2016-09-01

    A common view in current psychology is that people estimate probabilities using various 'heuristics' or rules of thumb that do not follow the normative rules of probability theory. We present a model where people estimate conditional probabilities such as P(A|B) (the probability of A given that B has occurred) by a process that follows standard frequentist probability theory but is subject to random noise. This model accounts for various results from previous studies of conditional probability judgment. This model predicts that people's conditional probability judgments will agree with a series of fundamental identities in probability theory whose form cancels the effect of noise, while deviating from probability theory in other expressions whose form does not allow such cancellation. Two experiments strongly confirm these predictions, with people's estimates on average agreeing with probability theory for the noise-cancelling identities, but deviating from probability theory (in just the way predicted by the model) for other identities. This new model subsumes an earlier model of unconditional or 'direct' probability judgment which explains a number of systematic biases seen in direct probability judgment (Costello & Watts, 2014). This model may thus provide a fully general account of the mechanisms by which people estimate probabilities.

  11. Nonholonomic catheter path reconstruction using electromagnetic tracking

    NASA Astrophysics Data System (ADS)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge in accurate path reconstructions. We address this challenge by means of a filtering technique incorporating the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using the Ascension's 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements, and 3.3 mm with manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach successfully improved the path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.

  12. Evaluation of the Probability of Non-sentinel Lymph Node Metastasis in Breast Cancer Patients with Sentinel Lymph Node Metastasis using Two Different Methods

    PubMed Central

    Başoğlu, İrfan; Çelik, Muhammet Ferhat; Dural, Ahmet Cem; Ünsal, Mustafa Gökhan; Akarsu, Cevher; Baytekin, Halil Fırat; Kapan, Selin; Alış, Halil

    2015-01-01

    Objective The aim of this retrospective clinical study was to evaluate the accuracy and feasibility of two different clinical scales, namely the Memorial Sloan-Kettering Cancer Center (MSKCC) nomogram and Tenon’s axillary scoring system, which were developed for predicting the non-sentinel lymph node (NSLN) status in our breast cancer patients. Material and Methods The medical records of patients who were diagnosed with breast cancer between January 2010 and November 2013 were reviewed. Those who underwent sentinel lymph node biopsy (SLNB) for axillary staging were recruited for the study, and patients who were found to have positive SLNB and thus were subsequently subjected to axillary lymph node dissection (ALND) were also included. Patients who had neoadjuvant therapy, who had clinically positive axilla, and who had stage 4 disease were excluded. Patients were divided into two groups. Group 1 included those who had negative NSLNs, whereas Group 2 included those who had positive NSLNs. The following data were collected: age, tumor size, histopathological characteristics of the tumor, presence of lymphovascular invasion, presence of multifocality, number of negative and positive NSLNs, size of metastases, histopathological method used to define metastases, and receptor status of the tumor. The score of each patient was calculated according to the MSKCC nomogram and Tenon’s axillary scoring system. Statistical analysis was conducted to investigate the correlation between the scores and the involvement of NSLNs. Results The medical records of patients who were diagnosed with breast cancer and found to have SLNB for axillary staging were reviewed. Finally, 50 patients who had positive SLNB and thus were subsequently subjected to ALND were included in the study. There were 17 and 33 patients in Groups 1 and 2, respectively. Both the MSKCC nomogram and Tenon’s axillary scoring system were demonstrated to be significantly accurate in the prediction of the

  13. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy

    PubMed Central

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-01-01

    Background and Purpose Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results The dose-volume-based models (standard) performed equally to those incorporating spatial information. Discrimination was similar between models, but the RFCstandard had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions The RFCstandard model performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717

  14. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.

  15. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    SciTech Connect

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; Neary, Vincent S.

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.

  16. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, input, output and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.

  17. Assessment of the probability of contaminating Mars

    NASA Technical Reports Server (NTRS)

    Judd, B. R.; North, D. W.; Pezier, J. P.

    1974-01-01

    New methodology is proposed to assess the probability that the planet Mars will be biologically contaminated by terrestrial microorganisms aboard a spacecraft. Present NASA methods are based on the Sagan-Coleman formula, which states that the probability of contamination is the product of the expected microbial release and a probability of growth. The proposed new methodology extends the Sagan-Coleman approach to permit utilization of detailed information on microbial characteristics, the lethality of release and transport mechanisms, and other information about the Martian environment. Three different types of microbial release are distinguished in the model for assessing the probability of contamination. The number of viable microbes released by each mechanism depends on the bio-burden in various locations on the spacecraft and on whether the spacecraft landing is accomplished according to plan. For each of the three release mechanisms a probability of growth is computed, using a model for transport into an environment suited to microbial growth.
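
    As a small numerical illustration of the Sagan-Coleman style bookkeeping described above, the toy calculation below uses made-up per-mechanism release counts and growth probabilities; the actual assessment distinguishes bio-burden locations and landing outcomes in far more detail.

        # Toy Sagan-Coleman style calculation (hypothetical numbers, for illustration only).
        # For each release mechanism: expected viable microbes released N and
        # probability of growth per released microbe Pg.
        import math

        releases = {
            "nominal_landing_exposure": (2.0e2, 1.0e-7),
            "hard_impact_breakup":      (5.0e4, 1.0e-7),
            "aeolian_erosion":          (1.0e3, 1.0e-8),
        }

        # Probability that at least one released microbe grows, assuming independence:
        # P = 1 - prod_i (1 - Pg_i)^N_i  ~  1 - exp(-sum_i N_i * Pg_i) for small Pg_i.
        expected_growers = sum(N * Pg for N, Pg in releases.values())
        p_contamination = 1.0 - math.exp(-expected_growers)
        print(f"expected growth events: {expected_growers:.3e}, "
              f"P(contamination) ~ {p_contamination:.3e}")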

  18. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
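
    As a rough illustration of two of the GA elements named above, the sketch below applies tournament selection and two-point ("double") crossover to fixed-length lists of planar waypoints between assumed start and goal points; the actual encoding, fitness function, and space-station structure constraints are not reproduced.

        # Minimal GA sketch: tournament selection + two-point crossover on waypoint paths.
        import random

        START, GOAL, N_WAY = (0.0, 0.0), (10.0, 10.0), 8

        def length(ind):
            pts = [START] + ind + [GOAL]
            return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                       for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

        def tournament(pop, k=3):
            return min(random.sample(pop, k), key=length)     # shorter path wins

        def two_point_crossover(a, b):
            i, j = sorted(random.sample(range(N_WAY), 2))
            return a[:i] + b[i:j] + a[j:]

        def mutate(ind, sigma=0.5):
            return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma)) for x, y in ind]

        pop = [[(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_WAY)]
               for _ in range(60)]
        for _ in range(200):
            pop = [mutate(two_point_crossover(tournament(pop), tournament(pop))) for _ in pop]
        print("best path length:", round(length(min(pop, key=length)), 2))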

  19. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.
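
    The record does not spell out the competing scenarios, but the usual reading of the puzzle can be enumerated directly; the two conditioning events below are illustrative assumptions, not necessarily those debated in the article.

        # Illustrative enumeration of the two-child puzzle under two different conditions.
        from itertools import product

        families = list(product("GB", repeat=2))   # (first child, second child), equally likely

        def conditional(event, given):
            cond = [f for f in families if given(f)]
            return sum(event(f) for f in cond) / len(cond)

        both_girls = lambda f: f == ("G", "G")
        print(conditional(both_girls, lambda f: "G" in f))      # at least one girl -> 1/3
        print(conditional(both_girls, lambda f: f[1] == "G"))   # the second child is a girl -> 1/2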

  20. Comparison of methods for transfer of calibration models in near-infrared spectroscopy: a case study based on correcting path length differences using fiber-optic transmittance probes in in-line near-infrared spectroscopy.

    PubMed

    Sahni, Narinder Singh; Isaksson, Tomas; Naes, Tormod

    2005-04-01

    This article addresses problems related to transfer of calibration models due to variations in distance between the transmittance fiber-optic probes. The data have been generated using a mixture design and measured at five different probe distances. A number of techniques reported in the literature have been compared. These include multiplicative scatter correction (MSC), path length correction (PLC), finite impulse response (FIR), orthogonal signal correction (OSC), piecewise direct standardization (PDS), and robust calibration. The quality of the predictions was expressed in terms of root mean square error of prediction (RMSEP). Robust calibration gave good calibration transfer results, while the other methods did not give acceptable results.
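
    As a brief sketch of two of the quantities named above, the snippet below implements multiplicative scatter correction and RMSEP with numpy on assumed array inputs; the standardization and robust-calibration methods actually compared in the study are not reproduced.

        # Sketch: multiplicative scatter correction (MSC) and RMSEP (assumed array inputs).
        import numpy as np

        def msc(spectra, reference=None):
            """Regress each spectrum on a reference (default: mean spectrum) and correct it."""
            ref = spectra.mean(axis=0) if reference is None else reference
            corrected = np.empty_like(spectra, dtype=float)
            for i, s in enumerate(spectra):
                b, a = np.polyfit(ref, s, deg=1)      # s is approximately a + b * ref
                corrected[i] = (s - a) / b
            return corrected

        def rmsep(y_pred, y_ref):
            """Root mean square error of prediction."""
            return float(np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_ref)) ** 2)))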

  1. Coherent Assessment of Subjective Probability

    DTIC Science & Technology

    1981-03-01

    known results of de Finetti (1937, 1972, 1974), Smith (1961), and Savage (1971) and some recent results of Lindley (1980) concerning the use of... provides the motivation for de Finetti's definition of subjective probabilities as coherent bet prices. From the definition of the probability measure... subjective probability, the probability laws which are traditionally stated as axioms or definitions are obtained instead as theorems.

  2. Improved initial guess for minimum energy path calculations

    SciTech Connect

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt

    2014-06-07

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.
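
    As a schematic of the objective described above, the sketch below evaluates an IDPP-style function on a chain of images, using pairwise distances interpolated linearly between the endpoints and an inverse-fourth-power weight; the nudged elastic band optimization on this surface and the subsequent DFT calculations are not included.

        # Sketch of an IDPP-style objective for a chain of images (assumed coordinate arrays).
        import numpy as np

        def pair_distances(coords):
            """coords: (n_atoms, 3) array; returns the upper-triangle pairwise distances."""
            diff = coords[:, None, :] - coords[None, :, :]
            d = np.sqrt((diff ** 2).sum(-1))
            return d[np.triu_indices(len(coords), k=1)]

        def idpp_objective(images, d_init, d_final):
            """images: list of (n_atoms, 3) arrays; d_init/d_final: endpoint pair distances."""
            n_img = len(images)
            total = 0.0
            for k, coords in enumerate(images):
                t = k / (n_img - 1)
                d_target = (1 - t) * d_init + t * d_final        # interpolated pair distances
                d = pair_distances(coords)
                total += np.sum(d ** -4 * (d_target - d) ** 2)   # weight short distances most
            return total

        # Typical use: build images by linear interpolation of Cartesian coordinates, then
        # relax them against this objective before starting the expensive DFT path search.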

  3. An Unplanned Path

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  4. Gas path seal

    NASA Technical Reports Server (NTRS)

    Bill, R. C.; Johnson, R. D. (Inventor)

    1979-01-01

    A gas path seal suitable for use with a turbine engine or compressor is described. A shroud, wearable or abradable by the abrasion of the rotor blades of the turbine or compressor, surrounds the rotor blades. A compliant backing surrounds the shroud. The backing is a yieldingly deformable porous material covered with a thin ductile layer. A mounting fixture surrounds the backing.

  5. Path Integral Approach to Atomic Collisions

    NASA Astrophysics Data System (ADS)

    Harris, Allison

    2016-09-01

    The Path Integral technique is an alternative formulation of quantum mechanics that is based on a Lagrangian approach. In its exact form, it is completely equivalent to the Hamiltonian-based Schrödinger equation approach. Developed by Feynman in the 1940's, following inspiration from Dirac, the path integral approach has been widely used in high energy physics, quantum field theory, and statistical mechanics. However, only in limited cases has the path integral approach been applied to quantum mechanical few-body scattering. We present a theoretical and computational development of the path integral method for use in the study of atomic collisions. Preliminary results are presented for some simple systems. Ultimately, this approach will be applied to few-body ion-atom collisions. Work supported by NSF grant PHY-1505217.

  6. Probabilities of transversions and transitions.

    PubMed

    Vol'kenshtein, M V

    1976-01-01

    The values of the mean relative probabilities of transversions and transitions have been refined on the basis of the data collected by Jukes and found to be equal to 0.34 and 0.66, respectively. Evolutionary factors increase the probability of transversions to 0.44. The relative probabilities of individual substitutions have been determined, and a detailed classification of the nonsense mutations has been given. Such mutations are especially probable in the UGG (Trp) codon. The highest probability of AG, GA transitions correlates with the lowest mean change in the hydrophobic nature of the amino acids coded.

  7. In search of a statistical probability model for petroleum-resource assessment : a critique of the probabilistic significance of certain concepts and methods used in petroleum-resource assessment : to that end, a probabilistic model is sketched

    USGS Publications Warehouse

    Grossling, Bernardo F.

    1975-01-01

    Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models, such as those using the logistic and Gompertz functions, have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then

  8. Communication path for extreme environments

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C. (Inventor); Betts, Bradley J. (Inventor)

    2010-01-01

    Methods and systems for using one or more radio frequency identification devices (RFIDs), or other suitable signal transmitters and/or receivers, to provide a sensor information communication path, to provide location and/or spatial orientation information for an emergency service worker (ESW), to provide an ESW escape route, to indicate a direction from an ESW to an ES appliance, to provide updated information on a region or structure that presents an extreme environment (fire, hazardous fluid leak, underwater, nuclear, etc.) in which an ESW works, and to provide accumulated thermal load or thermal breakdown information on one or more locations in the region.

  9. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students at the higher education level have an effect on their performance. 62 fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance on the probability topic was not related to anxiety level; that is, a higher level of statistics anxiety did not lead to a lower score on the probability topic. The study also revealed that motivated students who attended the probability workshop showed a positive improvement in their performance on the probability topic compared with before the workshop. In addition, there was a significant difference in performance between genders, with better achievement among female students than among male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  10. Paths of Target Seeking Missiles in Two Dimensions

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E.

    1946-01-01

    Parameters that enter into the equation of the trajectory of a missile are discussed. An investigation is made of normal pursuit, of constant, proportional, and line-of-sight methods of navigation employing a target seeker, and of deriving the corresponding pursuit paths. Pursuit paths obtained under similar conditions for the different methods are compared. Proportional navigation is concluded to be the best method for using a target seeker installed in a missile.
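
    As a toy numerical illustration of proportional navigation, the sketch below integrates a planar pursuit in which the missile's turn rate is a fixed multiple of the line-of-sight rotation rate; speeds, gain and geometry are made-up values, and the report's treatment is analytical rather than numerical.

        # Toy 2-D proportional navigation: heading rate = N * (line-of-sight rate).
        import math

        def intercept_time(N=3.0, vm=300.0, vt=200.0, dt=0.005, t_max=60.0):
            mx, my, heading = 0.0, 0.0, 0.0                  # missile state
            tx, ty, t_heading = 5000.0, 3000.0, math.pi      # target flies in the -x direction
            los_prev = math.atan2(ty - my, tx - mx)
            t = 0.0
            while t < t_max:
                los = math.atan2(ty - my, tx - mx)
                los_rate = (los - los_prev) / dt
                heading += N * los_rate * dt                 # proportional navigation law
                los_prev = los
                mx += vm * math.cos(heading) * dt
                my += vm * math.sin(heading) * dt
                tx += vt * math.cos(t_heading) * dt
                ty += vt * math.sin(t_heading) * dt
                t += dt
                if math.hypot(tx - mx, ty - my) < 10.0:
                    return t                                  # intercept
            return None

        print("intercept time (s):", intercept_time())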

  11. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows rather than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while ensuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative
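
    As a small sketch of the EMOS-type post-processing cited above (Gneiting et al., 2005), the snippet below fits a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and ensemble variance, by minimizing the closed-form Gaussian CRPS with scipy; the regionalized, spatially penalized calibration used for EFAS is much more involved.

        # Sketch: Gaussian EMOS post-processing, N(a + b*ens_mean, c + d*ens_var),
        # fitted by minimizing the mean CRPS over a training set (assumed arrays).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def gaussian_crps(mu, sigma, y):
            z = (y - mu) / sigma
            return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

        def fit_emos(ens_mean, ens_var, obs):
            def loss(theta):
                a, b, c, d = theta
                mu = a + b * ens_mean
                sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))
                return gaussian_crps(mu, sigma, obs).mean()
            return minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

        # a, b, c, d = fit_emos(train_mean, train_var, train_obs)
        # forecast quantiles: norm.ppf([0.05, 0.5, 0.95], loc=a + b*m, scale=np.sqrt(c + d*v))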

  12. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  13. Path optimization with limited sensing ability

    SciTech Connect

    Kang, Sung Ha Kim, Seong Jun Zhou, Haomin

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.
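
    As a sketch of the initial-guess step described above, the snippet below greedily orders a set of precomputed stationary sensor locations by the nearest-neighbor rule to seed the mobile-sensor path; the level-set coverage computation, the ODE-based length reduction, and the intermittent-diffusion SDE stage are not shown.

        # Sketch: nearest-neighbor ordering of stationary sensor locations to form an
        # initial mobile-sensor path (locations assumed given as 2-D points).
        import numpy as np

        def nearest_neighbor_order(points, start_index=0):
            points = np.asarray(points, dtype=float)
            unvisited = list(range(len(points)))
            order = [unvisited.pop(start_index)]
            while unvisited:
                dists = np.linalg.norm(points[unvisited] - points[order[-1]], axis=1)
                order.append(unvisited.pop(int(np.argmin(dists))))
            return order

        def path_length(points, order):
            pts = np.asarray(points, dtype=float)[order]
            return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

        # The resulting ordering is only an initial guess; it would then be shortened while
        # maintaining complete coverage, as described in the abstract.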

  14. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large devastating events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
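
    As a toy version of the counting idea described above, the snippet below converts the number of small events observed since the last large event into a conditional probability of a large event via a Weibull law; the shape and scale values are placeholders, not the fitted values from the study.

        # Toy count-to-probability conversion: P(large event within the next dn small
        # events | n small events since the last large one), under a Weibull law.
        import math

        def weibull_survival(n, shape, scale):
            return math.exp(-((n / scale) ** shape))

        def conditional_large_event_probability(n, dn, shape=1.4, scale=200.0):
            """shape and scale are illustrative placeholders, not fitted values."""
            return 1.0 - weibull_survival(n + dn, shape, scale) / weibull_survival(n, shape, scale)

        # e.g. 150 small events since the last large one; probability within the next 50:
        print(conditional_large_event_probability(150, 50))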

  15. PATHS groundwater hydrologic model

    SciTech Connect

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document consists of the description of the PATHS groundwater hydrologic model. The preliminary evaluation capability prepared for WISAP, including the enhancements that were made because of the authors' experience using the earlier capability, is described. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS. It is written for users who have little or no experience with computers. Appendix C is for the programmer. It contains information on how input parameters are passed between programs in the system. It also contains program listings and a test case listing. Appendix D is a definition of terms.

  16. The Probabilities of Unique Events

    DTIC Science & Technology

    2012-08-30

    probabilities into quantum mechanics, and some psychologists have argued that they have a role to play in accounting for errors in judgment [30]. But, in... Discussion: The mechanisms underlying naive estimates of the probabilities of unique events are largely inaccessible to consciousness, but they... Can quantum probability provide a new direction for cognitive modeling? Behavioral and Brain Sciences (in press).

  17. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  18. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP), can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  19. PROBABILITY SURVEYS , CONDITIONAL PROBABILITIES AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  20. Path-breaking schemes for nonequilibrium free energy calculations

    NASA Astrophysics Data System (ADS)

    Chelli, Riccardo; Gellini, Cristina; Pietraperzia, Giangaetano; Giovannelli, Edoardo; Cardini, Gianni

    2013-06-01

    We propose a path-breaking route to the enhancement of unidirectional nonequilibrium simulations for the calculation of free energy differences via Jarzynski's equality [C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997)], 10.1103/PhysRevLett.78.2690. One of the most important limitations of unidirectional nonequilibrium simulations is the number of realizations necessary to reach suitable convergence of the work exponential average featured in Jarzynski's relationship. In this respect, a significant improvement in performance could be obtained by finding a way of stopping trajectories with negligible contribution to the work exponential average, before their normal end. This is achieved using path-breaking schemes which are essentially based on periodic checks of the work dissipated during the pulling trajectories. Such schemes can be based either on breaking trajectories whose dissipated work exceeds a given threshold or on breaking trajectories with a probability increasing with the dissipated work. In both cases, the computer time needed to carry out a series of nonequilibrium trajectories is reduced up to a factor ranging from 2 to more than 10, at least for the processes under consideration in the present study. The efficiency depends on several aspects, such as the type of process, the number of check-points along the pathway, and the pulling rate. The method is illustrated through radically different processes, i.e., the helix-coil transition of deca-alanine and the pulling of the distance between two methane molecules in water solution.
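
    As a toy illustration of the threshold-based criterion described above, the sketch below accumulates made-up work increments over a series of check-points and breaks a trajectory once its dissipated work exceeds a threshold; the correction that keeps the work exponential average unbiased in the actual path-breaking schemes is deliberately not reproduced here.

        # Toy sketch of threshold-based path breaking (illustration only; the unbiased
        # estimator of the cited schemes requires an additional correction not shown).
        import random

        def run_trajectory(n_checkpoints=10, w_step_mean=0.5, w_step_sd=1.0,
                           df_per_step=0.2, w_diss_max=5.0):
            """Accumulate work segment by segment; stop early if dissipated work is too large."""
            work, free_energy = 0.0, 0.0
            for _ in range(n_checkpoints):
                work += random.gauss(w_step_mean, w_step_sd)   # made-up work increment
                free_energy += df_per_step                      # running free-energy reference
                if work - free_energy > w_diss_max:             # dissipated-work check
                    return work, False                          # trajectory broken early
            return work, True                                   # trajectory completed

        results = [run_trajectory() for _ in range(1000)]
        n_full = sum(done for _, done in results)
        print(f"{n_full} of 1000 trajectories ran to completion; the rest were broken early.")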