Science.gov

Sample records for path probability method

  1. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of each of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0)=a and p(1)=b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method of iterating Newton's method on solutions of a two point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non
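
    The following sketch illustrates the kind of calculation the abstract describes: a discretized probability path between two base-frequency vectors whose entropy-weighted length is minimized numerically. It is a hedged, illustrative stand-in (assuming a simple softmax parameterization and scipy's general-purpose optimizer), not the authors' Newton/boundary-value Matlab code.

```python
# Illustrative sketch (not the authors' code): discretize a probability path between
# vectors a and b and minimize the entropy-weighted arc length
# sum_k H(midpoint_k) * |p_{k+1} - p_k|.
import numpy as np
from scipy.optimize import minimize

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def path_energy(z, a, b, K, N):
    # Interior points parameterized via softmax so each stays on the probability simplex.
    interior = [softmax(zi) for zi in z.reshape(K - 1, N)]
    pts = [a] + interior + [b]
    total = 0.0
    for p, q in zip(pts[:-1], pts[1:]):
        mid = 0.5 * (p + q)
        total += entropy(mid) * np.linalg.norm(q - p)
    return total

a = np.array([0.7, 0.1, 0.1, 0.1])      # assumed base frequencies of sequence 1 (A, C, G, T)
b = np.array([0.25, 0.25, 0.25, 0.25])  # assumed base frequencies of sequence 2
K, N = 10, 4                            # number of path segments, number of categories
z0 = np.zeros((K - 1) * N)
res = minimize(path_energy, z0, args=(a, b, K, N), method="BFGS")
print("approximate entropy-path distance:", res.fun)
```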

  2. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by the use of a functional technique, and the general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. Then, the formalism developed here is applied to the stochastic dynamics of stock prices in finance.
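
    A hedged numerical illustration of the tube probability discussed above: for ordinary Brownian motion, the probability of a path remaining within a band of half-width eps around a prescribed reference path can be estimated by direct Monte Carlo. The diffusion coefficient, band width and reference path below are assumptions for illustration only (the paper evaluates this quantity analytically via the path-probability functional).

```python
# Illustrative Monte Carlo estimate of the probability that a Brownian path stays
# inside a band of half-width eps around an assumed reference path x_ref(t).
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 20000
D, eps = 0.02, 0.3                        # assumed diffusion coefficient and tube half-width
dt = T / n_steps
t = np.linspace(0.0, T, n_steps + 1)
x_ref = 0.1 * np.sin(2 * np.pi * t)       # assumed reference (tube centre) path

increments = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_paths, n_steps))
paths = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)

inside = np.all(np.abs(paths - x_ref) < eps, axis=1)   # path never leaves the tube
print("estimated tube probability:", inside.mean())
```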

  3. Pattern formation, logistics, and maximum path probability

    NASA Astrophysics Data System (ADS)

    Kirkaldy, J. S.

    1985-05-01

    The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are

  4. Dynamic phase transitions of the Blume-Emery-Griffiths model under an oscillating external magnetic field by the path probability method

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters; a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadrupole point and triple point as well as reentrant behavior, depending strongly on the values of the system parameters. We compare and discuss the dynamic phase diagrams with the dynamic phase diagrams that were obtained within Glauber-type stochastic dynamics based on mean-field theory.

  5. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: Higher order theory based on the Bethe-Peierls and path probability method approximations

    SciTech Connect

    Edison, John R.; Monson, Peter A.

    2014-07-14

    Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.

  6. The Path-of-Probability Algorithm for Steering and Feedback Control of Flexible Needles

    PubMed Central

    Park, Wooram; Wang, Yunfeng; Chirikjian, Gregory S.

    2010-01-01

    In this paper we develop a new framework for path planning of flexible needles with bevel tips. Based on a stochastic model of needle steering, the probability density function for the needle tip pose is approximated as a Gaussian. The means and covariances are estimated using an error propagation algorithm which has second order accuracy. Then we adapt the path-of-probability (POP) algorithm to path planning of flexible needles with bevel tips. We demonstrate how our planning algorithm can be used for feedback control of flexible needles. We also derive a closed-form solution for the port placement problem for finding good insertion locations for flexible needles in the case when there are no obstacles. Furthermore, we propose a new method using reference splines with the POP algorithm to solve the path planning problem for flexible needles in more general cases that include obstacles. PMID:21151708

  7. Flood hazard probability mapping method

    NASA Astrophysics Data System (ADS)

    Kalantari, Zahra; Lyon, Steve; Folkeson, Lennart

    2015-04-01

    In Sweden, spatially explicit approaches have been applied in various disciplines such as landslide modelling based on soil type data and flood risk modelling for large rivers. Regarding flood mapping, most previous studies have focused on complex hydrological modelling on a small scale whereas just a few studies have used a robust GIS-based approach integrating most physical catchment descriptor (PCD) aspects on a larger scale. The aim of the present study was to develop a methodology for predicting the spatial probability of flooding on a general large scale. Factors such as topography, land use, soil data and other PCDs were analysed in terms of their relative importance for flood generation. The specific objective was to test the methodology using statistical methods to identify factors having a significant role in controlling flooding. A second objective was to generate an index quantifying the flood probability for each cell, based on different weighted factors, in order to provide a more accurate analysis of potential high flood hazards than can be obtained using just a single variable. The ability of indicator covariance to capture flooding probability was determined for different watersheds in central Sweden. Using data from this initial investigation, a method to subtract spatial data for multiple catchments and to produce soft data for statistical analysis was developed. It allowed flood probability to be predicted from spatially sparse data without compromising the significant hydrological features on the landscape. By using PCD data, realistic representations of high-probability flood regions were made, regardless of the magnitude of rain events. This in turn allowed objective quantification of the probability of floods at the field scale for future model development and watershed management.
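
    A minimal sketch of the weighted flood-probability index idea described above, assuming synthetic rasters and illustrative weights (the factor names and weights are not those derived in the study):

```python
# Illustrative per-cell flood-probability index as a weighted sum of normalized factors.
import numpy as np

def normalize(x):
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

rng = np.random.default_rng(1)
shape = (100, 100)
slope      = rng.random(shape)        # stand-ins for rasters of physical catchment descriptors
wetness    = rng.random(shape)
clay_soil  = rng.random(shape)
urban_frac = rng.random(shape)

# Lower slope and higher wetness/clay/urban fraction assumed to raise flood probability.
factors = [1.0 - normalize(slope), normalize(wetness), normalize(clay_soil), normalize(urban_frac)]
weights = [0.4, 0.3, 0.2, 0.1]        # assumed relative importance, summing to 1

flood_index = sum(w * f for w, f in zip(weights, factors))
print("cells in highest-hazard decile:", int((flood_index > np.quantile(flood_index, 0.9)).sum()))
```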

  8. Looping probabilities of elastic chains: a path integral approach.

    PubMed

    Cotta-Ramusino, Ludovica; Maddocks, John H

    2010-11-01

    We consider an elastic chain at thermodynamic equilibrium with a heat bath, and derive an approximation to the probability density function, or pdf, governing the relative location and orientation of the two ends of the chain. Our motivation is to exploit continuum mechanics models for the computation of DNA looping probabilities, but here we focus on explaining the novel analytical aspects in the derivation of our approximation formula. Accordingly, and for simplicity, the current presentation is limited to the illustrative case of planar configurations. A path integral formalism is adopted, and, in the standard way, the first approximation to the looping pdf is obtained from a minimal energy configuration satisfying prescribed end conditions. Then we compute an additional factor in the pdf which encompasses the contributions of quadratic fluctuations about the minimum energy configuration along with a simultaneous evaluation of the partition function. The original aspects of our analysis are twofold. First, the quadratic Lagrangian describing the fluctuations has cross-terms that are linear in first derivatives. This, seemingly small, deviation from the structure of standard path integral examples complicates the necessary analysis significantly. Nevertheless, after a nonlinear change of variable of Riccati type, we show that the correction factor to the pdf can still be evaluated in terms of the solution to an initial value problem for the linear system of Jacobi ordinary differential equations associated with the second variation. The second novel aspect of our analysis is that we show that the Hamiltonian form of these linear Jacobi equations still provides the appropriate correction term in the inextensible, unshearable limit that is commonly adopted in polymer physics models of, e.g. DNA. Prior analyses of the inextensible case have had to introduce nonlinear and nonlocal integral constraints to express conditions on the relative displacement of the end

  9. Most probable paths in temporal weighted networks: An application to ocean transport

    NASA Astrophysics Data System (ADS)

    Ser-Giacomi, Enrico; Vasile, Ruggero; Hernández-García, Emilio; López, Cristóbal

    2015-07-01

    We consider paths in weighted and directed temporal networks, introducing tools to compute sets of paths of high probability. We quantify the relative importance of the most probable path between two nodes with respect to the whole set of paths and to a subset of highly probable paths that incorporate most of the connection probability. These concepts are used to provide alternative definitions of betweenness centrality. We apply our formalism to a transport network describing surface flow in the Mediterranean sea. Although the full transport dynamics is described by a very large number of paths, we find that, for realistic time scales, only a very small subset of high probability paths (or even a single most probable one) is enough to characterize global connectivity properties of the network.
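
    A hedged sketch of the core computation: with edge weights interpreted as transition probabilities, the most probable path is the minimum-cost path under a -log transform. The toy graph below is an assumption for illustration, not the Mediterranean flow network.

```python
# Most probable path = shortest path with edge costs -log(p),
# since maximizing a product of probabilities is minimizing the sum of -log(p).
import math
import networkx as nx

G = nx.DiGraph()
edges = [("A", "B", 0.6), ("A", "C", 0.4), ("B", "D", 0.5), ("C", "D", 0.9), ("B", "C", 0.3)]
for u, v, p in edges:
    G.add_edge(u, v, prob=p, cost=-math.log(p))

path = nx.shortest_path(G, source="A", target="D", weight="cost")
prob = math.prod(G[u][v]["prob"] for u, v in zip(path[:-1], path[1:]))
print("most probable path:", path, "with probability", round(prob, 3))
```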

  10. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high quality and high reliability transfer such as telephony over IP and premium virtual private networks (VPNs). It is, therefore, important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation. It does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates of primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided; no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and path pair total cost, while the overheads they impose, through protocol revision and increases in the amount of calculation and information to be exchanged, are large. This suggests that the first scheme, the most basic and simple one, is the best choice.
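
    The sketch below illustrates the idea behind the third scheme on a toy graph: among all edge-disjoint primary/secondary path pairs, choose the pair with minimum total cost. Brute-force enumeration is used purely for illustration on a small topology; the graph and costs are assumptions, not the paper's evaluation network.

```python
# Illustrative minimum-total-cost edge-disjoint primary/secondary path pair.
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("s", "a", 1), ("a", "t", 1), ("s", "b", 2), ("b", "t", 2), ("a", "b", 1)])

def cost(path):
    return sum(G[u][v]["weight"] for u, v in zip(path[:-1], path[1:]))

def edges_of(path):
    return {frozenset(e) for e in zip(path[:-1], path[1:])}

paths = list(nx.all_simple_paths(G, "s", "t"))
best = min(
    ((p, q) for p, q in itertools.combinations(paths, 2) if edges_of(p).isdisjoint(edges_of(q))),
    key=lambda pair: cost(pair[0]) + cost(pair[1]),
)
print("primary/secondary pair:", best, "total cost:", cost(best[0]) + cost(best[1]))
```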

  11. Reconstructing the Most Probable Folding Transition Path from Replica Exchange Molecular Dynamics Simulations.

    PubMed

    Jimenez-Cruz, Camilo Andres; Garcia, Angel E

    2013-08-13

    The characterization of transition pathways between long-lived states, and the identification of the corresponding transition state ensembles, are useful tools in the study of rare events such as protein folding. In this work we demonstrate how the most probable transition path between metastable states can be recovered from replica exchange molecular dynamics simulation data by using the dynamic string method. The local drift vector in collective variables is determined via short continuous trajectories between replica exchanges at a given temperature, and points along the string are updated based on this drift vector to produce reaction pathways between the folded and unfolded states. The method is applied to a designed beta hairpin-forming peptide to obtain information on the folding mechanism and transition state using different sets of collective variables at various temperatures. Two main folding pathways differing in the order of events are found and discussed, and the relative free energy differences for each path are estimated. Finally, the structures near the transition state are found and described. PMID:26584126

  12. Computational methods for probability of instability calculations

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the roots of the characteristic equation or Routh-Hurwitz test functions are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
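
    A hedged sketch of the probabilistic ingredient: for a single-degree-of-freedom system, the Routh-Hurwitz condition reduces to positive damping and stiffness, so the probability of instability can be estimated by sampling the uncertain coefficients. The distributions below are illustrative assumptions, not the paper's examples.

```python
# Monte Carlo estimate of P(instability) for m*x'' + c*x' + k*x = 0 with uncertain c, k.
# With m > 0, the roots of m*s^2 + c*s + k = 0 have negative real parts iff c > 0 and k > 0.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
c = rng.normal(loc=0.05, scale=0.10, size=n)   # assumed damping: small mean, large scatter
k = rng.normal(loc=2.00, scale=0.50, size=n)   # assumed stiffness

unstable = (c <= 0.0) | (k <= 0.0)             # Routh-Hurwitz violated
p_instability = unstable.mean()
se = np.sqrt(p_instability * (1 - p_instability) / n)
print(f"P(instability) ~ {p_instability:.4f} +/- {se:.4f}")
```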

  13. Monte Carlo methods to calculate impact probabilities

    NASA Astrophysics Data System (ADS)

    Rickman, H.; Wiśniowski, T.; Wajer, P.; Gabryszewski, R.; Valsecchi, G. B.

    2014-09-01

    Context. Unraveling the events that took place in the solar system during the period known as the late heavy bombardment requires the interpretation of the cratered surfaces of the Moon and terrestrial planets. This, in turn, requires good estimates of the statistical impact probabilities for different source populations of projectiles, a subject that has received relatively little attention, since the works of Öpik (1951, Proc. R. Irish Acad. Sect. A, 54, 165) and Wetherill (1967, J. Geophys. Res., 72, 2429). Aims: We aim to work around the limitations of the Öpik and Wetherill formulae, which are caused by singularities due to zero denominators under special circumstances. Using modern computers, it is possible to make good estimates of impact probabilities by means of Monte Carlo simulations, and in this work, we explore the available options. Methods: We describe three basic methods to derive the average impact probability for a projectile with a given semi-major axis, eccentricity, and inclination with respect to a target planet on an elliptic orbit. One is a numerical averaging of the Wetherill formula; the next is a Monte Carlo super-sizing method using the target's Hill sphere. The third uses extensive minimum orbit intersection distance (MOID) calculations for a Monte Carlo sampling of potentially impacting orbits, along with calculations of the relevant interval for the timing of the encounter allowing collision. Numerical experiments are carried out for an intercomparison of the methods and to scrutinize their behavior near the singularities (zero relative inclination and equal perihelion distances). Results: We find an excellent agreement between all methods in the general case, while there appear large differences in the immediate vicinity of the singularities. With respect to the MOID method, which is the only one that does not involve simplifying assumptions and approximations, the Wetherill averaging impact probability departs by diverging toward

  14. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  15. Multi-Level Indoor Path Planning Method

    NASA Astrophysics Data System (ADS)

    Xiong, Q.; Zhu, Q.; Zlatanova, S.; Du, Z.; Zhang, Y.; Zeng, L.

    2015-05-01

    Indoor navigation is increasingly widespread in complex indoor environments, and indoor path planning is the most important part of indoor navigation. Path planning generally refers to finding the most suitable path connecting two locations, while avoiding collision with obstacles. However, it remains a fundamental problem, especially for complex 3D building models. A common approach in the relevant literature operates primarily on 2D drawings or building layouts, possibly with a few attached attributes for obstacles. Although several digital building models in the format of 3D CAD have been used for path planning, they usually contain only geometric information while losing the abundant semantic information of building components (e.g. types and attributes of building components and their simple relationships). Therefore, it becomes important to develop a reliable method that can enhance path planning applications by combining both geometric and semantic information of building components. This paper introduces a method that supports 3D indoor path planning with semantic information.

  16. "Latest Capabilities of Pov-Ray Ricochet Flight Path Analysis & Impact Probability Prediction Software"

    SciTech Connect

    Price, D.E.; Brereton, S.; Newton, M.; Moore, B.; Muirhead, D.; Pastrnak, J.; Prokosch, D.; Spence, B.; Towle, R.

    2000-09-05

    POV-Ray Ricochet Tracker is a freeware computer code developed to analyze high-speed fragment ricochet trajectory paths in complex 3-D areas such as explosives firing chambers, facility equipment rooms, or shipboard Command and Control Centers. The code analyzes as many as millions of individual fragment trajectory paths in three dimensions and tracks these trajectory paths for up to four bounces through the three-dimensional model. It allows determination of the probabilities of hitting any designated areas or objects in the model. It creates renderings of any ricochet flight paths of interest in photo-realistic renderings of the 3-D model. POV-Ray Ricochet Tracker is a customized version of the Persistence of Vision™ Ray-Tracer (POV-Ray™) version 3.02 code for the Macintosh™ Operating System (MacOS™). POV-Ray is a third generation graphics engine that creates three-dimensional, very high quality (photo-realistic) images with realistic reflections, shading, textures, perspective, and other effects using a rendering technique called ray-tracing. It reads a text file that describes the objects, lighting, and camera location in a scene and generates an image of that scene from the viewpoint of the camera. More information about POV-Ray, including the executable and source code, may be found at http://www.povray.org. The customized code (POV-Ray Shrapnel Tracker, V3.02-Custom Build 2) generates individual fragment trajectory paths at any desired angle intervals in three dimensions. The code tracks these trajectory paths through any complex three-dimensional space, and outputs detailed data for each ray as requested by the user. The output may include trajectory source location, initial direction of each trajectory, vector data for each bounce point, and any impacts with designated model target surfaces during any trajectory segment (direct path or reflected paths). This allows determination of the three-dimensional trajectory of

  17. Path integral method for DNA denaturation

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2009-04-01

    The statistical physics of homogeneous DNA is investigated by the imaginary time path integral formalism. The base pair stretchings are described by an ensemble of paths selected through a macroscopic constraint, the fulfillment of the second law of thermodynamics. The number of paths contributing to the partition function strongly increases around and above a specific temperature Tc∗ , whereas the fraction of unbound base pairs grows continuously around and above Tc∗ . The latter is identified with the denaturation temperature. Thus, the separation of the two complementary strands appears as a highly cooperative phenomenon displaying a smooth crossover versus T . The thermodynamical properties have been computed in a large temperature range by varying the size of the path ensemble at the lower bound of the range. No significant physical dependence on the system size has been envisaged. The entropy grows continuously versus T while the specific heat displays a remarkable peak at Tc∗ . The location of the peak versus T varies with the stiffness of the anharmonic stacking interaction along the strand. The presented results suggest that denaturation in homogeneous DNA has the features of a second-order phase transition. The method accounts for the cooperative behavior of a very large number of degrees of freedom while the computation time is kept within a reasonable limit.

  18. Do-It-Yourself Critical Path Method.

    ERIC Educational Resources Information Center

    Morris, Edward P., Jr.

    This report describes the critical path method (CPM), a system for planning and scheduling work to get the best time-cost combination for any particular job. With the use of diagrams, the report describes how CPM works on a step-by-step basis. CPM uses a network to show which parts of a job must be done and how they would eventually fit together…
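
    For readers who want to see the mechanics, here is a minimal sketch of CPM's forward and backward passes on an assumed toy activity network (activities and durations are illustrative, not taken from the report):

```python
# Critical path method: forward pass for earliest finish times, backward pass for
# latest finish times; activities with zero slack lie on the critical path.
# tasks: name -> (duration, list of predecessors); listed in a valid topological order.
tasks = {
    "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]),
    "D": (2, ["B", "C"]), "E": (3, ["C"]), "F": (1, ["D", "E"]),
}

order = list(tasks)
earliest_finish = {}
for t in order:                       # forward pass
    dur, preds = tasks[t]
    earliest_finish[t] = dur + max((earliest_finish[p] for p in preds), default=0)

project_duration = max(earliest_finish.values())
latest_finish = {t: project_duration for t in tasks}
for t in reversed(order):             # backward pass
    dur, preds = tasks[t]
    for p in preds:
        latest_finish[p] = min(latest_finish[p], latest_finish[t] - dur)

critical = [t for t in order if earliest_finish[t] == latest_finish[t]]
print("project duration:", project_duration, "critical path:", critical)
```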

  19. Path Integral Monte Carlo Methods for Fermions

    NASA Astrophysics Data System (ADS)

    Ethan, Ethan; Dubois, Jonathan; Ceperley, David

    2014-03-01

    In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10-ERD-058, and the Lawrence Scholar program.

  20. Method of path coefficients: a trademark of Sewall Wright.

    PubMed

    Li, C C

    1991-02-01

    This address is a tribute to a pioneer of population genetics, including human population genetics. The unique methodology employed by Sewall Wright in many genetic problems is the method of path coefficients. This essay traces the historical landmarks in the development of the path method and then shows how some of the conventional statistical results can be converted into expressions involving path coefficients. The construction of a path diagram to represent such statistical results is explained in terms of examples. In the last section an example of applying the path method to the problem of genetic linkage in a random mating population is given. I hope that, despite the ending of the Sewall Wright era, the path method will continue to serve the scientific world. PMID:2004740
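
    A small worked sketch of the path-coefficient idea for standardized variables: with two correlated causes, the path coefficients are the standardized partial regression coefficients, and each correlation with the outcome decomposes into a direct and an indirect contribution. The correlations below are illustrative assumptions, not data from the address.

```python
# Wright-style path coefficients for Y caused by two correlated standardized causes X1, X2.
import numpy as np

r12 = 0.5                      # assumed correlation between X1 and X2
r_y = np.array([0.6, 0.5])     # assumed correlations of Y with X1 and X2
R = np.array([[1.0, r12], [r12, 1.0]])

p = np.linalg.solve(R, r_y)    # path coefficients p1, p2
direct_1 = p[0]
indirect_1 = r12 * p[1]        # effect of X1 on Y routed through the correlated cause X2
print("path coefficients:", p)
print("r(Y,X1) = direct + indirect =", direct_1, "+", indirect_1, "=", direct_1 + indirect_1)
```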

  1. Computing tunneling paths with the Hamilton-Jacobi equation and the fast marching method

    NASA Astrophysics Data System (ADS)

    Dey, Bijoy K.; Ayers, Paul W.

    We present a new method for computing the most probable tunneling paths based on the minimum imaginary action principle. Unlike many conventional methods, the paths are calculated without resorting to an optimization (minimization) scheme. Instead, a fast marching method coupled with a back-propagation scheme is used to efficiently compute the tunneling paths. The fast marching method solves a Hamilton-Jacobi equation for the imaginary action on a discrete grid where the action value at an initial point (usually the reactant state configuration) is known in the beginning. Subsequently, a back-propagation scheme uses a steepest descent method on the imaginary action surface to compute a path connecting an arbitrary point on the potential energy surface (usually a state in the product valley) to the initial state. The proposed method is demonstrated for the tunneling paths of two different systems: a model 2D potential surface and the collinear reaction. Unlike existing methods, where the tunneling path is based on a presumed reaction coordinate and a correction is made with respect to the reaction coordinate within an 'adiabatic' approximation, the proposed method is very general and makes no assumptions about the relationship between the reaction coordinate and tunneling path.
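
    A hedged sketch of the same idea, using Dijkstra's algorithm on a grid as a discrete stand-in for the fast marching solver: the accumulated WKB imaginary action, with integrand sqrt(2m(V-E)), is propagated outward from the reactant point, and the tunneling path is recovered by back-propagation through stored parent pointers. The double-well potential, mass and energy are assumptions for illustration, not the systems studied in the paper.

```python
# Discrete stand-in for fast marching: Dijkstra accumulates the imaginary action
# on a grid, then the tunneling path is traced back through parent pointers.
import heapq
import numpy as np

n, m_mass, E = 81, 1.0, 0.1
x = np.linspace(-1.5, 1.5, n)
X, Y = np.meshgrid(x, x, indexing="ij")
V = (X**2 - 1.0) ** 2 + 0.5 * Y**2                          # assumed double-well potential
h = x[1] - x[0]
cost = np.sqrt(2.0 * m_mass * np.clip(V - E, 0.0, None))    # imaginary-action integrand
start, goal = (5, n // 2), (n - 6, n // 2)                  # reactant and product wells

action = np.full((n, n), np.inf)
parent = {}
action[start] = 0.0
heap = [(0.0, start)]
while heap:                                                 # Dijkstra sweep
    a, (i, j) = heapq.heappop(heap)
    if a > action[i, j]:
        continue
    if (i, j) == goal:
        break
    for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]:
        ii, jj = i + di, j + dj
        if 0 <= ii < n and 0 <= jj < n:
            step = h * np.hypot(di, dj)
            new_a = a + 0.5 * (cost[i, j] + cost[ii, jj]) * step
            if new_a < action[ii, jj]:
                action[ii, jj] = new_a
                parent[(ii, jj)] = (i, j)
                heapq.heappush(heap, (new_a, (ii, jj)))

path, node = [goal], goal
while node != start:                                        # back-propagation along parents
    node = parent[node]
    path.append(node)
print("imaginary action:", action[goal], "path length (grid points):", len(path))
```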

  2. Exact transition probabilities in a 6-state Landau-Zener system with path interference

    NASA Astrophysics Data System (ADS)

    Sinitsyn, N. A.

    2015-05-01

    We identify a nontrivial multistate Landau-Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. We discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.

  3. Path Following in the Exact Penalty Method of Convex Programming

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2015-01-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value. PMID:26366044

  4. Guide waves-based multi-damage identification using a local probability-based diagnostic imaging method

    NASA Astrophysics Data System (ADS)

    Gao, Dongyue; Wu, Zhanjun; Yang, Lei; Zheng, Yuebin

    2016-04-01

    Multi-damage identification is an important and challenging task in the research of guide waves-based structural health monitoring. In this paper, a multi-damage identification method is presented using a guide waves-based local probability-based diagnostic imaging (PDI) method. The method includes a path damage judgment stage, a multi-damage judgment stage and a multi-damage imaging stage. First, damage imaging is performed by partition: the damage imaging regions are divided by the beside-damage signal paths. The difference in guide wave propagation characteristics between cross-damage and beside-damage paths is established by theoretical analysis of the guide wave signal features, and the time-of-flight difference of the paths is used as a factor to distinguish between cross-damage and beside-damage paths. Then, a global PDI method (damage identification using all paths in the sensor network) is performed using the beside-damage path network. If the global PDI damage zone crosses a beside-damage path, the discrete multi-damage model (such as a group of holes or cracks) has been misjudged as a continuous single-damage model (such as a single hole or crack) by the global PDI method. Subsequently, the damage imaging regions are separated by the beside-damage paths and local PDI (damage identification using paths in the damage imaging regions) is performed in each damage imaging region. Finally, multi-damage identification results are obtained by superimposing the local damage imaging results and the marked cross-damage paths. The method is employed to inspect multi-damage in an aluminum plate with a surface-mounted piezoelectric ceramic sensor network. The results show that the guide waves-based multi-damage identification method is capable of visualizing the presence, quantity and location of structural damage.
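
    A hedged sketch of a probability-based diagnostic imaging map in the RAPID style commonly used for guided-wave PDI: each transducer pair spreads its damage index over an elliptical zone around its direct path with a linearly decaying weight. The sensor layout, damage indices and shape factor are illustrative assumptions, not the paper's experimental values.

```python
# Illustrative PDI/RAPID-style damage probability map from per-path damage indices.
import numpy as np

sensors = [(0.0, 0.0), (0.4, 0.0), (0.4, 0.4), (0.0, 0.4)]   # assumed sensor positions (m)
# (transmitter index, receiver index, damage index from a signal-difference feature)
paths = [(0, 2, 0.8), (1, 3, 0.7), (0, 1, 0.1), (2, 3, 0.1), (1, 2, 0.1), (0, 3, 0.1)]
beta = 1.05                                                   # assumed ellipse shape factor

nx_pix, ny_pix = 200, 200
gx, gy = np.meshgrid(np.linspace(0, 0.4, nx_pix), np.linspace(0, 0.4, ny_pix), indexing="ij")
image = np.zeros_like(gx)

for ti, ri, di in paths:
    t, r = np.array(sensors[ti]), np.array(sensors[ri])
    d_tr = np.linalg.norm(t - r)
    d_tp = np.hypot(gx - t[0], gy - t[1])
    d_pr = np.hypot(gx - r[0], gy - r[1])
    rel = (d_tp + d_pr) / d_tr                 # equals 1 on the direct path, grows off-path
    weight = np.clip((beta - rel) / (beta - 1.0), 0.0, 1.0)
    image += di * weight                       # accumulate this pair's contribution

i, j = np.unravel_index(np.argmax(image), image.shape)
print("most probable damage location (m):", (round(gx[i, j], 3), round(gy[i, j], 3)))
```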

  5. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that
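
    A minimal sketch of the path-metric ingredient of PSA: the symmetric Hausdorff distance between two sampled curves, computed here for toy 2D paths rather than 3N-dimensional configuration-space trajectories (scipy's directed_hausdorff is used; the example paths are assumptions).

```python
# Symmetric Hausdorff distance between two sampled paths.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

t = np.linspace(0.0, 1.0, 100)
path_a = np.column_stack([t, np.sin(np.pi * t)])             # illustrative transition path A
path_b = np.column_stack([t, np.sin(np.pi * t) + 0.1 * t])   # a slightly perturbed path B

d_ab = directed_hausdorff(path_a, path_b)[0]
d_ba = directed_hausdorff(path_b, path_a)[0]
print("symmetric Hausdorff distance:", max(d_ab, d_ba))
```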

  6. Probability Density Function Method for Langevin Equations with Colored Noise

    SciTech Connect

    Wang, Peng; Tartakovsky, Alexandre M.; Tartakovsky, Daniel M.

    2013-04-05

    We present a novel method to derive closed-form, computable PDF equations for Langevin systems with colored noise. The derived equations govern the dynamics of joint or marginal probability density functions (PDFs) of state variables, and rely on a so-called Large-Eddy-Diffusivity (LED) closure. We demonstrate the accuracy of the proposed PDF method for linear and nonlinear Langevin equations, describing the classical Brownian displacement and dispersion in porous media.

  7. Bias annealing: A method for obtaining transition paths de novo

    NASA Astrophysics Data System (ADS)

    Hu, Jie; Ma, Ao; Dinner, Aaron R.

    2006-09-01

    Computational studies of dynamics in complex systems require means for generating reactive trajectories with minimum knowledge about the processes of interest. Here, we introduce a method for generating transition paths when an existing one is not already available. Starting from biased paths obtained from steered molecular dynamics, we use a Monte Carlo procedure in the space of whole trajectories to shift gradually to sampling an ensemble of unbiased paths. Application to basin-to-basin hopping in a two-dimensional model system and nucleotide-flipping by a DNA repair protein demonstrates that the method can efficiently yield unbiased reactive trajectories even when the initial steered dynamics differ significantly. The relation of the method to others and the physical basis for its success are discussed.

  8. The method of modular characteristic direction probabilities in MPACT

    SciTech Connect

    Liu, Z.; Kochunas, B.; Collins, B.; Downar, T.; Wu, H.

    2013-07-01

    The method of characteristic direction probabilities (CDP) is based on a modular ray tracing technique which combines the benefits of the collision probability method (CPM) and the method of characteristics (MOC). This past year CDP was implemented in the transport code MPACT for 2-D and 3-D transport calculations. By only coupling the fine mesh regions passed by the characteristic rays in the particular direction, the scale of the probabilities matrix is much smaller compared to the CPM. At the same time, the CDP has the same capacity for dealing with complicated geometries as the MOC, because the same modular ray tracing techniques are used. Results from the C5G7 benchmark problems are given for different cases to show the accuracy and efficiency of the CDP compared to MOC. For the cases examined, the CDP and MOC methods were seen to differ in k_eff by about 1-20 pcm, and the computational efficiency of the CDP appears to be better than the MOC for some problems. However, in other problems, particularly when the CDP matrices have to be recomputed from changing cross sections, the CDP does not perform as well. This indicates an area of future work. (authors)

  9. Sequential quadratic programming method for determining the minimum energy path.

    PubMed

    Burger, Steven K; Yang, Weitao

    2007-10-28

    A new method, referred to as the sequential quadratic programming method, is presented for determining minimum energy paths. The method is based on minimizing the points representing the path in the subspace perpendicular to the tangent of the path while using a penalty term to prevent kinks from forming. Rather than taking one full step, the minimization is divided into a number of sequential steps on an approximate quadratic surface. The resulting method can efficiently determine the reaction mechanism, from which the transition state can be easily identified and refined with other methods. To improve the resolution of the path close to the transition state, points are clustered close to this region with a reparametrization scheme. The usefulness of the algorithm is demonstrated for the Muller-Brown potential, amide hydrolysis, and an 89 atom cluster taken from the active site of 4-oxalocrotonate tautomerase for the reaction which catalyzes 2-oxo-4-hexenedioate to the intermediate 2-hydroxy-2,4-hexadienedioate. PMID:17979319

  10. THE CRITICAL-PATH METHOD OF CONSTRUCTION CONTROL.

    ERIC Educational Resources Information Center

    DOMBROW, RODGER T.; MAUCHLY, JOHN

    THIS DISCUSSION PRESENTS A DEFINITION AND BRIEF DESCRIPTION OF THE CRITICAL-PATH METHOD AS APPLIED TO BUILDING CONSTRUCTION. INTRODUCING REMARKS CONSIDER THE MOST PERTINENT QUESTIONS PERTAINING TO CPM AND THE NEEDS ASSOCIATED WITH MINIMIZING TIME AND COST ON CONSTRUCTION PROJECTS. SPECIFIC DISCUSSION INCLUDES--(1) ADVANTAGES OF NETWORK TECHNIQUES,…

  11. UV Multi-scatter Propagation Model of Point Probability Method

    NASA Astrophysics Data System (ADS)

    Lu, Bai; Zhensen, Wu; Haiying, Li

    Based on the Monte Carlo multi-scatter propagation model, an improved geometric model is proposed. The model is improved by using the point probability method. Comparison is made between the multiple-scattering propagation models and the single-scatter propagation model in terms of calculation time and relative error. The effects of complex weather, obstacles, and different transmitter and receiver heights are discussed. It is shown that although the single-scatter propagation model can be evaluated easily by standard numerical integration, it cannot describe the general non-line-of-sight propagation problem, while the improved point-probability multi-scatter Monte Carlo model may be applied to more general cases.

  12. Determination of the inelastic mean free path in ice by examination of tilted vesicles and automated most probable loss imaging.

    PubMed

    Grimm, R; Typke, D; Bärmann, M; Baumeister, W

    1996-07-01

    Using electron microscopy, the thickness of ice-embedded vesicles is estimated by examining tilted and untilted views and assuming an ellipsoidal shape for vesicles that appear circular in the untilted view. Another thickness measure is obtained from the ratio of the unfiltered and zero-loss-filtered image intensities of the vesicle. From these two measurements, the mean free path λ for inelastic scattering of electrons in ice is calculated as 203 ± 33 nm for 120 kV acceleration voltage. It is found that vesicles in thin ice films (≤ 1.5λ) significantly protrude out of the ice film. Due to surface tension the shape becomes an oblate ellipsoid. In holes covered with a thick ice film (≥ 3λ) and strong thickness gradients, vesicles are predominantly found in regions where the ice thickness is appropriate for their size. Also, a way of imaging the most probable loss under low-dose conditions involving thickness measurement is proposed. Even at large ice thicknesses zero-loss filtering always gives better image contrast. Most probable loss imaging can only help where there is no intensity in the zero-loss image, at very large thicknesses (> 8λ). PMID:8921626
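
    The thickness measure from filtered intensities rests on the standard log-ratio relation t = λ ln(I_total/I_zero-loss); combined with a geometric thickness estimate, it yields λ. A minimal sketch with illustrative numbers (not the paper's data):

```python
# Log-ratio relation between thickness, inelastic mean free path and filtered intensities.
import numpy as np

t_geometric_nm = 180.0          # assumed vesicle thickness from tilted/untilted views
I_total, I_zero = 1.00, 0.41    # assumed unfiltered and zero-loss-filtered intensities

lam = t_geometric_nm / np.log(I_total / I_zero)   # lambda = t / ln(I_total/I_zero)
print(f"estimated inelastic mean free path: {lam:.0f} nm")
```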

  13. Probability-theoretical analog of the vector Lyapunov function method

    SciTech Connect

    Nakonechnyi, A.N.

    1995-01-01

    The main ideas of the vector Lyapunov function (VLF) method were advanced in 1962 by Bellman and Matrosov. In this method, a Lyapunov function and a comparison equation are constructed for each subsystem. Then the dependences between the subsystems and the effect of external noise are allowed for by constructing a vector Lyapunov function (as a collection of the scalar Lyapunov functions of the subsystems) and an aggregate comparison function for the entire complex system. A probability-theoretical analog of this method for convergence analysis of stochastic approximation processes has been developed. The abstract approach proposed elsewhere eliminates all restrictions on the system phase space, the system trajectories, the class of Lyapunov functions, etc. The analysis focuses only on the conditions that relate sequences of Lyapunov function values with the derivative and ensure a particular type (mode, character) of stability. In our article, we extend this approach to the VLF method for discrete stochastic dynamic systems.

  14. Aircraft Engine Gas Path Diagnostic Methods: Public Benchmarking Results

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Borguet, Sebastien; Leonard, Olivier; Zhang, Xiaodong (Frank)

    2013-01-01

    Recent technology reviews have identified the need for objective assessments of aircraft engine health management (EHM) technologies. To help address this issue, a gas path diagnostic benchmark problem has been created and made publicly available. This software tool, referred to as the Propulsion Diagnostic Method Evaluation Strategy (ProDiMES), has been constructed based on feedback provided by the aircraft EHM community. It provides a standard benchmark problem enabling users to develop, evaluate and compare diagnostic methods. This paper will present an overview of ProDiMES along with a description of four gas path diagnostic methods developed and applied to the problem. These methods, which include analytical and empirical diagnostic techniques, will be described and associated blind-test-case metric results will be presented and compared. Lessons learned along with recommendations for improving the public benchmarking processes will also be presented and discussed.

  15. CPM (Critical Path Method) as a Curriculum Tool.

    ERIC Educational Resources Information Center

    Mongerson, M. Duane

    This document discusses and illustrates the use of the Critical Path Method (CPM) as a tool for developing curriculum. In so doing a brief review of the evolution of CPM as a management tool developed by E. I. duPont de Nemours Company is presented. It is also noted that CPM is only a method of sequencing learning activities and not an end unto…

  16. A method for assessing the arm movement performance: probability tube.

    PubMed

    Kostić, Miloš; Popović, Mirjana B; Popović, Dejan B

    2013-12-01

    Quantification of motor performance is an important component of the rehabilitation of humans with sensory-motor disability. We developed a method for assessing the arm movement performance of trainees (patients) termed the "probability tube" (PT). The PT captures the stochastic characteristics of a desired movement when repeated by an expert (therapist). The PT is generated automatically, in just a few minutes, from data recorded during point-to-point movements executed in no more than 15 repetitions by the clinician and/or another non-expert programmer. We introduce an index, termed the probability tube score (PTS), as a single "goodness-of-fit" value allowing quantified analysis of the recovery and the effects of the therapy. This index in fact scores the difference between the movement (velocity profile) executed by the trainee and the velocity profile of the desired movement (executed by the expert). We document the goodness of the automatic method with results from studies which included healthy subjects and show the use of the PTS in healthy and post-stroke hemiplegic subjects. PMID:23921787

  17. The conditional risk probability-based seawall height design method

    NASA Astrophysics Data System (ADS)

    Yang, Xing; Hu, Xiaodong; Li, Zhiqing

    2015-11-01

    The determination of the required seawall height is usually based on the combination of wind speed (or wave height) and still water level according to a specified return period, e.g., 50-year return period wind speed and 50-year return period still water level. In reality, the two variables may be partially correlated. This may lead to over-design (and excess cost) of seawall structures. The above-mentioned return period for the design of a seawall depends on the economy, society and natural environment of the region. This means a specified risk level of overtopping or damage of a seawall structure is usually allowed. The aim of this paper is to present a conditional risk probability-based seawall height design method which incorporates the correlation of the two variables. For purposes of demonstration, the wind speeds and water levels collected from Jiangsu, China, are analyzed. The results show this method can improve seawall height design accuracy.

  18. Probability of detection models for eddy current NDE methods

    SciTech Connect

    Rajesh, S.N.

    1993-04-30

    The development of probability of detection (POD) models for a variety of nondestructive evaluation (NDE) methods is motivated by a desire to quantify the variability introduced during the process of testing. Sources of variability involved in eddy current methods of NDE include those caused by variations in liftoff, material properties, probe canting angle, scan format, surface roughness and measurement noise. This thesis presents a comprehensive POD model for eddy current NDE. Eddy current methods of nondestructive testing are used widely in industry to inspect a variety of nonferromagnetic and ferromagnetic materials. The development of a comprehensive POD model is therefore of significant importance. The model incorporates several sources of variability characterized by a multivariate Gaussian distribution and employs finite element analysis to predict the signal distribution. The method of mixtures is then used for estimating optimal threshold values. The research demonstrates the use of a finite element model within a probabilistic framework to predict the spread in the measured signal for eddy current nondestructive methods. Using the signal distributions for various flaw sizes, the POD curves for varying defect parameters have been computed. In contrast to experimental POD models, the cost of generating such curves is very low and complex defect shapes can be handled very easily. The results are also operator independent.

  19. On path-following methods for structural failure problems

    NASA Astrophysics Data System (ADS)

    Stanić, Andjelka; Brank, Boštjan; Korelc, Jože

    2016-08-01

    We revisit the consistently linearized path-following method that can be applied in the nonlinear finite element analysis of solids and structures in order to compute a solution path. Within this framework, two constraint equations are considered: a quadratic one (that includes as special cases popular spherical and cylindrical forms of constraint equation), and another one that constrains only one degree-of-freedom (DOF), the critical DOF. In both cases, the constrained DOFs may vary from one solution increment to another. The former constraint equation is successful in analysing geometrically nonlinear and/or standard inelastic problems with snap-throughs, snap-backs and bifurcation points. However, it cannot handle problems with the material softening that are computed e.g. by the embedded-discontinuity finite elements. This kind of problems can be solved by using the latter constraint equation. The plusses and minuses of both presented constraint equations are discussed and illustrated on a set of numerical examples. Some of the examples also include direct computation of critical points and branch switching. The direct computation of the critical points is performed in the framework of the path-following method by using yet another constraint function, which is eigenvector-free and suited to detect critical points.

  20. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
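
    A hedged sketch contrasting the two sampling methods named above for estimating response density parameters; the response function and input distributions are illustrative assumptions, not a NESSUS model (scipy.stats.qmc provides the Latin hypercube sampler):

```python
# Estimating mean, standard deviation and a percentile of a response with
# plain Monte Carlo versus Latin hypercube sampling.
import numpy as np
from scipy.stats import norm, qmc

def response(x1, x2):
    return x1**2 + 3.0 * x2 + 0.5 * x1 * x2    # stand-in for an FEA response

n = 512
rng = np.random.default_rng(0)

# Plain Monte Carlo: independent standard-normal inputs.
mc = response(rng.standard_normal(n), rng.standard_normal(n))

# Latin hypercube sampling: stratified uniforms mapped to normals via the inverse CDF.
u = qmc.LatinHypercube(d=2, seed=0).random(n)
lhs = response(norm.ppf(u[:, 0]), norm.ppf(u[:, 1]))

for name, y in [("MC ", mc), ("LHS", lhs)]:
    print(name, "mean %.3f  std %.3f  99th pct %.3f" % (y.mean(), y.std(ddof=1), np.quantile(y, 0.99)))
```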

  1. Numerical methods for high-dimensional probability density function equations

    NASA Astrophysics Data System (ADS)

    Cho, H.; Venturi, D.; Karniadakis, G. E.

    2016-01-01

    In this paper we address the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations arise naturally in many different areas of mathematical physics, e.g., in particle systems (Liouville and Boltzmann equations), stochastic dynamical systems (Fokker-Planck and Dostupov-Pugachev equations), random wave theory (Malakhov-Saichev equations) and coarse-grained stochastic systems (Mori-Zwanzig equations). We propose three different classes of new algorithms addressing high-dimensionality: The first one is based on separated series expansions resulting in a sequence of low-dimensional problems that can be solved recursively and in parallel by using alternating direction methods. The second class of algorithms relies on truncation of interaction in low-orders that resembles the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) framework of kinetic gas theory and it yields a hierarchy of coupled probability density function equations. The third class of algorithms is based on high-dimensional model representations, e.g., the ANOVA method and probabilistic collocation methods. A common feature of all these approaches is that they are reducible to the problem of computing the solution to high-dimensional equations via a sequence of low-dimensional problems. The effectiveness of the new algorithms is demonstrated in numerical examples involving nonlinear stochastic dynamical systems and partial differential equations, with up to 120 variables.

  2. Large Eddy Simulation and the Filtered Probability Density Function Method

    NASA Astrophysics Data System (ADS)

    Jones, W. P.; Navarro-Martinez, S.

    2009-12-01

    Recently there has been increased interest in modelling combustion processes with high levels of extinction and re-ignition. Such systems often lie beyond the scope of conventional single scalar-based models. Large Eddy Simulation (LES) has shown large potential for describing turbulent reactive systems, though combustion occurs at the smallest unresolved scales of the flow and must be modelled. In the sub-grid Probability Density Function (pdf) method approximations are devised to close the evolution equation for the joint pdf, which is then solved directly. The paper describes such an approach and concerns, in particular, the Eulerian stochastic field method of solving the pdf equation. The paper examines the capabilities of the LES-pdf method in capturing auto-ignition and extinction events in different partially premixed configurations with different fuels (hydrogen, methane and n-heptane). The results show that the LES-pdf formulation can capture different regimes without any parameter adjustments, independent of Reynolds numbers and fuel type.

  3. New methods for calculating short-wave radio paths

    NASA Astrophysics Data System (ADS)

    Popov, A. V.; Tsedilina, E. E.; Cherkashin, Iu. N.

    Recent research on the calculation of short-wave paths at IZMIRAN (the Soviet Institute for the Study of Terrestrial Magnetism, the Ionosphere, and the Propagation of Radio Waves) is reviewed. Particular attention is given to: (1) the development of approximate analytical methods for ray-tracing calculations and for determining the geometrical-optics characteristics of a radio signal in a horizontally irregular ionosphere; (2) investigations of the long-range and short-wave propagation of decametric waves; and (3) the development of a parabolic-equation method for considering diffraction and scattering in a medium with regular and random irregularities.

  4. Parameterizing deep convection using the assumed probability density function method

    SciTech Connect

    Storer, R. L.; Griffin, B. M.; Hoft, Jan; Weber, J. K.; Raut, E.; Larson, Vincent E.; Wang, Minghuai; Rasch, Philip J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.
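
    A minimal sketch of the general idea behind an assumed-PDF parameterization interfaced to microphysics by Monte Carlo sampling: subgrid variability is drawn from an assumed distribution and a nonlinear local process rate is averaged over the samples, which differs from evaluating the rate at the grid mean. The lognormal PDF, the power-law rate, and all parameter values below are illustrative assumptions, not the scheme used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed subgrid PDF of cloud water mixing ratio qc: lognormal with a prescribed
      # grid-box mean and relative variance (illustrative values).
      qc_mean, rel_var = 2.0e-4, 1.0
      sigma2 = np.log(1.0 + rel_var)
      mu = np.log(qc_mean) - 0.5 * sigma2

      def local_rate(qc):
          # Illustrative nonlinear local process rate (power law in qc).
          return 1.0e3 * qc**2.47

      # Monte Carlo sampling of the assumed PDF: the "interface" to the microphysics.
      qc_samples = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=100_000)
      rate_pdf = local_rate(qc_samples).mean()     # grid-mean rate implied by the PDF
      rate_at_mean = local_rate(qc_mean)           # rate evaluated at the grid mean only

      print("grid-mean rate from assumed PDF: %.3e" % rate_pdf)
      print("rate at the grid-mean qc:        %.3e" % rate_at_mean)
      print("subgrid enhancement factor:      %.2f" % (rate_pdf / rate_at_mean))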

  5. Parameterizing deep convection using the assumed probability density function method

    NASA Astrophysics Data System (ADS)

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2015-01-01

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  6. Parameterizing deep convection using the assumed probability density function method

    DOE PAGESBeta

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2014-06-11

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and mid-latitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  7. Parameterizing deep convection using the assumed probability density function method

    DOE PAGESBeta

    Storer, R. L.; Griffin, B. M.; Höft, J.; Weber, J. K.; Raut, E.; Larson, V. E.; Wang, M.; Rasch, P. J.

    2015-01-06

    Due to their coarse horizontal resolution, present-day climate models must parameterize deep convection. This paper presents single-column simulations of deep convection using a probability density function (PDF) parameterization. The PDF parameterization predicts the PDF of subgrid variability of turbulence, clouds, and hydrometeors. That variability is interfaced to a prognostic microphysics scheme using a Monte Carlo sampling method. The PDF parameterization is used to simulate tropical deep convection, the transition from shallow to deep convection over land, and midlatitude deep convection. These parameterized single-column simulations are compared with 3-D reference simulations. The agreement is satisfactory except when the convective forcing is weak. The same PDF parameterization is also used to simulate shallow cumulus and stratocumulus layers. The PDF method is sufficiently general to adequately simulate these five deep, shallow, and stratiform cloud cases with a single equation set. This raises hopes that it may be possible in the future, with further refinements at coarse time step and grid spacing, to parameterize all cloud types in a large-scale model in a unified way.

  8. An adaptation of Krylov subspace methods to path following

    SciTech Connect

    Walker, H.F.

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
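
    The predictor-corrector structure described above can be illustrated with a small dense example; a direct 2x2 solve stands in for the Krylov iteration, and the corrector step is made orthogonal to the approximate tangent by augmenting the underdetermined Newton equation. The test curve F(x, lambda) = x^2 + lambda^2 - 1 = 0 and the step length are illustrative choices.

      import numpy as np

      def F(x, lam):                     # solution curve: the unit circle
          return x**2 + lam**2 - 1.0

      def J(x, lam):                     # Jacobian of F with respect to (x, lambda): 1 x 2
          return np.array([2.0 * x, 2.0 * lam])

      z = np.array([1.0, 0.0])           # start on the curve at (x, lambda) = (1, 0)
      step = 0.1
      for k in range(20):
          # Predictor: move along an approximate unit tangent (null vector of the Jacobian).
          g = J(*z)
          t = np.array([-g[1], g[0]])
          t /= np.linalg.norm(t)
          z = z + step * t
          # Corrector: Newton iterations on the augmented system
          #   [ J(z) ] dz = [ -F(z) ]    (Newton equation, underdetermined on its own)
          #   [ t^T  ]      [   0   ]    (step orthogonal to the approximate tangent)
          for _ in range(10):
              A = np.vstack([J(*z), t])
              rhs = np.array([-F(*z), 0.0])
              dz = np.linalg.solve(A, rhs)
              z = z + dz
              if np.linalg.norm(dz) < 1e-12:
                  break
          print("point %2d on curve: x=%+.4f  lambda=%+.4f  residual=%.1e" % (k, z[0], z[1], abs(F(*z))))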

  9. On the orthogonalised reverse path method for nonlinear system identification

    NASA Astrophysics Data System (ADS)

    Muhamad, P.; Sims, N. D.; Worden, K.

    2012-09-01

    The problem of obtaining the underlying linear dynamic compliance matrix in the presence of nonlinearities in a general multi-degree-of-freedom (MDOF) system can be solved using the conditioned reverse path (CRP) method introduced by Richards and Singh (1998 Journal of Sound and Vibration, 213(4): pp. 673-708). The CRP method also provides a means of identifying the coefficients of any nonlinear terms which can be specified a priori in the candidate equations of motion. Although the CRP has proved extremely useful in the context of nonlinear system identification, it has a number of small issues associated with it. One of these issues is the fact that the nonlinear coefficients are actually returned in the form of spectra which need to be averaged over frequency in order to generate parameter estimates. The parameter spectra are typically polluted by artefacts from the identification of the underlying linear system which manifest themselves at the resonance and anti-resonance frequencies. A further problem is associated with the fact that the parameter estimates are extracted in a recursive fashion which leads to an accumulation of errors. The first minor objective of this paper is to suggest ways to alleviate these problems without major modification to the algorithm. The results are demonstrated on numerically-simulated responses from MDOF systems. In the second part of the paper, a more radical suggestion is made, to replace the conditioned spectral analysis (which is the basis of the CRP method) with an alternative time domain decorrelation method. The suggested approach - the orthogonalised reverse path (ORP) method - is illustrated here using data from simulated single-degree-of-freedom (SDOF) and MDOF systems.

  10. New open-path remote optical sensing method to estimate methane emission from soil

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The U.S. EPA recently developed an open-path remote sensing method to identify hot spots and estimate fugitive gas emissions from closed landfills. The method measures several path-integrated concentrations (PICs) of gases using open-path optical instruments. These PICs are then processed using a co...

  11. Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel

    2011-01-01

    Archaeological sites are being compromised or destroyed at a catastrophic rate in most regions of the world. The best solution to this problem is for archaeologists to find and study these sites before they are compromised or destroyed. One way to facilitate the necessary rapid, wide area surveys needed to find these archaeological sites is through the generation of maps of probable archaeological sites from remotely sensed data. We describe an approach for identifying probable locations of archaeological sites over a wide area based on detecting subtle anomalies in vegetative cover through a statistically based analysis of remotely sensed data from multiple sources. We further developed this approach under a recent NASA ROSES Space Archaeology Program project. Under this project we refined and elaborated this statistical analysis to compensate for potential slight mis-registrations between the remote sensing data sources and the archaeological site location data. We also explored data quantization approaches (required by the statistical analysis approach), and we identified a superior data quantization approach based on a unique image segmentation approach. In our presentation we will summarize our refined approach and demonstrate the effectiveness of the overall approach with test data from Santa Catalina Island off the southern California coast. Finally, we discuss our future plans for further improving our approach.

  12. A probability density function method for acoustic field uncertainty analysis

    NASA Astrophysics Data System (ADS)

    James, Kevin R.; Dowling, David R.

    2005-11-01

    Acoustic field predictions, whether analytical or computational, rely on knowledge of the environmental, boundary, and initial conditions. When knowledge of these conditions is uncertain, acoustic field predictions will also be uncertain, even if the techniques for field prediction are perfect. Quantifying acoustic field uncertainty is important for applications that require accurate field amplitude and phase predictions, like matched-field techniques for sonar, nondestructive evaluation, bio-medical ultrasound, and atmospheric remote sensing. Drawing on prior turbulence research, this paper describes how an evolution equation for the probability density function (PDF) of the predicted acoustic field can be derived and used to quantify predicted-acoustic-field uncertainties arising from uncertain environmental, boundary, or initial conditions. Example calculations are presented in one and two spatial dimensions for the one-point PDF for the real and imaginary parts of a harmonic field, and show that predicted field uncertainty increases with increasing range and frequency. In particular, at 500 Hz in an ideal 100 m deep underwater sound channel with a 1 m root-mean-square depth uncertainty, the PDF results presented here indicate that at a range of 5 km, all phases and a 10 dB range of amplitudes will have non-negligible probability. Evolution equations for the two-point PDF are also derived.

  13. Nearest neighbor interaction in the Path Integral Renormalization Group method

    NASA Astrophysics Data System (ADS)

    de Silva, Wasanthi; Clay, R. Torsten

    2014-03-01

    The Path Integral Renormalization Group (PIRG) method is an efficient numerical algorithm for studying ground state properties of strongly correlated electron systems. The many-body ground state wave function is approximated by an optimized linear combination of Slater determinants which satisfies the variational principle. A major advantage of PIRG is that it does not suffer from the fermion sign problem of quantum Monte Carlo. Results are exact in the noninteracting limit and can be enhanced using space and spin symmetries. Many observables can be calculated using Wick's theorem. PIRG has been used predominantly for the Hubbard model with a single on-site Coulomb interaction U. We describe an extension of PIRG to the extended Hubbard model (EHM) including U and a nearest-neighbor interaction V. The EHM is particularly important in models of charge-transfer solids (organic superconductors) and at 1/4-filling drives a charge-ordered state. The presence of lattice frustration also makes studying these systems difficult. We test the method with comparisons to small clusters and long one-dimensional chains, and show preliminary results for a coupled-chain model for the (TMTTF)2X materials. This work was supported by DOE grant DE-FG02-06ER46315.

  14. Development of partial failure analysis method in probability risk assessments

    SciTech Connect

    Ni, T.; Modarres, M.

    1996-12-01

    This paper presents a new approach to evaluate the partial failure effect on current Probability Risk Assessments (PRAs). An integrated methodology of the thermal-hydraulic analysis and fuzzy logic simulation using the Dynamic Master Logic Diagram (DMLD) was developed. The thermal-hydraulic analysis used in this approach is to identify partial operation effect of any PRA system function in a plant model. The DMLD is used to simulate the system performance of the partial failure effect and inspect all minimal cut sets of system functions. This methodology can be applied in the context of a full scope PRA to reduce core damage frequency. An example of this application of the approach is presented. The partial failure data used in the example is from a survey study of partial failure effects from the Nuclear Plant Reliability Data System (NPRDS).

  15. Path Integrals and Exotic Options:. Methods and Numerical Results

    NASA Astrophysics Data System (ADS)

    Bormetti, G.; Montagna, G.; Moreni, N.; Nicrosini, O.

    2005-09-01

    In the framework of the Black-Scholes-Merton model of financial derivatives, a path integral approach to option pricing is presented. A general formula to price path-dependent options on multidimensional and correlated underlying assets is obtained and implemented by means of various flexible and efficient algorithms. As an example, we detail the case of Asian call options. The numerical results are compared with those obtained with other procedures used in quantitative finance and found to be in good agreement. In particular, when pricing at-the-money (ATM) and out-of-the-money (OTM) options, the path integral exhibits competitive performance.
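
    For reference, a plain Monte Carlo valuation of an arithmetic-average Asian call under Black-Scholes dynamics is sketched below in Python; this is one of the standard procedures such path-integral results are typically benchmarked against, not the path-integral algorithm itself. All contract parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(42)

      S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.20, 1.0   # illustrative contract parameters
      n_steps, n_paths = 252, 20_000
      dt = T / n_steps

      # Simulate geometric Brownian motion paths of the underlying asset.
      Z = rng.standard_normal((n_paths, n_steps))
      log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
      S = S0 * np.exp(np.cumsum(log_increments, axis=1))

      # Arithmetic-average Asian call payoff, discounted to today.
      payoff = np.maximum(S.mean(axis=1) - K, 0.0)
      price = np.exp(-r * T) * payoff.mean()
      stderr = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(n_paths)
      print("Asian call price: %.4f +/- %.4f" % (price, stderr))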

  16. The universal path integral

    NASA Astrophysics Data System (ADS)

    Lloyd, Seth; Dreyer, Olaf

    2016-02-01

    Path integrals calculate probabilities by summing over classical configurations of variables such as fields, assigning each configuration a phase equal to the action of that configuration. This paper defines a universal path integral, which sums over all computable structures. This path integral contains as sub-integrals all possible computable path integrals, including those of field theory, the standard model of elementary particles, discrete models of quantum gravity, string theory, etc. The universal path integral possesses a well-defined measure that guarantees its finiteness. The probabilities for events corresponding to sub-integrals can be calculated using the method of decoherent histories. The universal path integral supports a quantum theory of the universe in which the world that we see around us arises out of the interference between all computable structures.

  17. A K-nearest neighbors survival probability prediction method.

    PubMed

    Lowsky, D J; Ding, Y; Lee, D K K; McCulloch, C E; Ross, L F; Thistlethwaite, J R; Zenios, S A

    2013-05-30

    We introduce a nonparametric survival prediction method for right-censored data. The method generates a survival curve prediction by constructing a (weighted) Kaplan-Meier estimator using the outcomes of the K most similar training observations. Each observation has an associated set of covariates, and a metric on the covariate space is used to measure similarity between observations. We apply our method to a kidney transplantation data set to generate patient-specific distributions of graft survival and to a simulated data set in which the proportional hazards assumption is explicitly violated. We compare the performance of our method with the standard Cox model and the random survival forests method. PMID:23653217
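
    A minimal sketch of the core idea: for a query covariate vector, take the K most similar training observations under a Euclidean metric and build a Kaplan-Meier curve from their outcomes. The synthetic data, the unweighted estimator, and the choice of metric are illustrative simplifications of the published method.

      import numpy as np

      def kaplan_meier(times, events):
          """Kaplan-Meier survival estimate; events=1 for observed failure, 0 for censoring."""
          curve, surv = [], 1.0
          for t in np.unique(times):
              at_risk = np.sum(times >= t)
              d = np.sum((times == t) & (events == 1))
              if d > 0:
                  surv *= 1.0 - d / at_risk
              curve.append((t, surv))
          return curve

      def knn_survival_curve(X_train, times, events, x_query, k=25):
          """Kaplan-Meier curve built from the k training observations nearest to x_query."""
          dist = np.linalg.norm(X_train - x_query, axis=1)
          idx = np.argsort(dist)[:k]
          return kaplan_meier(times[idx], events[idx])

      # Synthetic right-censored data (illustrative, not the kidney-transplant data set).
      rng = np.random.default_rng(7)
      n = 500
      X = rng.normal(size=(n, 3))                              # covariates
      true_t = rng.exponential(scale=np.exp(0.8 * X[:, 0]))    # risk depends on the first covariate
      censor = rng.exponential(scale=2.0, size=n)
      times = np.minimum(true_t, censor)
      events = (true_t <= censor).astype(int)

      curve = knn_survival_curve(X, times, events, x_query=np.array([1.0, 0.0, 0.0]), k=30)
      for t, s in curve[:5]:
          print("t=%.3f  S(t)=%.3f" % (t, s))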

  18. A new method for determining semiclassical tunneling probabilities in atom-diatom reactions

    NASA Astrophysics Data System (ADS)

    Altkorn, Robert I.; Schatz, George C.

    1980-03-01

    We present an approximate semiclassical method for determining state-to-state transition probabilities for reactions which proceed via tunneling; it uses a trajectory integrated along purely real and purely imaginary time contours from reagents through the barrier to products. The real and imaginary time portions of the trajectory are connected by introducing separable approximations to the potential near certain translational turning points in the trajectory. For atom-diatom collinear reactions, the use of a vibrationally adiabatic approximation from these turning points to the asymptotic region leads to a very simple expression for the imaginary part of the action involving a nonseparable contribution from a purely real valued portion of the trajectory passing through the barrier along an imaginary time contour, and a separable contribution from a path which follows part of the locus of outer vibrational turning points. At very low translational energies E0, we find that the nonseparable contribution dominates in determining the reaction probability, and there we find very good agreement with the analogous semiclassical complex trajectory (SCCT) results of George and Miller for collinear H+H2. At higher E0, just below the classical threshold for reaction, the separable contribution dominates, and our method reduces to one proposed by Marcus and Coltrin (MC), which also shows good agreement with the SCCT results. Comparison of our results with exact quantum (EQ) results on both the Porter-Karplus and Truhlar-Kuppermann potential surfaces indicates agreement to within better than a factor of 2.5 over a wide range of relative translational energies (0.04 ...). This method is, however, much easier to apply than SCCT (only a real-valued portion of a trajectory is used), is capable of determining state-to-state transition probabilities (in contrast to PT) and ...

  19. Analyzing methods for path mining with applications in metabolomics.

    PubMed

    Tagore, Somnath; Chowdhury, Nirmalya; De, Rajat K

    2014-01-25

    Metabolomics is one of the key approaches of systems biology that consists of studying biochemical networks having a set of metabolites, enzymes, reactions and their interactions. As biological networks are very complex in nature, proper techniques and models need to be chosen for their better understanding and interpretation. One of the useful strategies in this regard is using path mining strategies and graph-theoretical approaches that help in building hypothetical models and perform quantitative analysis. Furthermore, they also contribute to analyzing topological parameters in metabolome networks. Path mining techniques can be based on grammars, keys, patterns and indexing. Moreover, they can also be used for modeling metabolome networks, finding structural similarities between metabolites, in-silico metabolic engineering, shortest path estimation and for various graph-based analysis. In this manuscript, we have highlighted some core and applied areas of path-mining for modeling and analysis of metabolic networks. PMID:24230973

  20. Nonlinear multi-agent path search method based on OFDM communication

    NASA Astrophysics Data System (ADS)

    Sato, Masatoshi; Igarashi, Yusuke; Tanaka, Mamoru

    This paper presents a novel shortest-path searching system based on analog circuit analysis, called the sequential local current comparison method on an alternating-current (AC) circuit (AC-SLCC). The local current comparison (LCC) method is a path-searching method in which the path is selected in the direction of the maximum current in a direct-current (DC) resistive circuit. Since searching for a plurality of shortest paths with the LCC method reduces to solving for the current distribution of the resistive circuit, the shortest-path problem can be solved extremely quickly. The AC-SLCC method is a novel LCC method with orthogonal frequency-division multiplexing (OFDM) communication on an AC circuit. It is able to send data along the shortest path without major data loss, and this suggests the possibility of application to a variety of systems (especially OFDM communication techniques).

  1. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    SciTech Connect

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.
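
    The variational reaction energy described above is the line integral of the gradient norm along a path. The Python sketch below evaluates a discretized VRE for a straight-line trial path on the Müller-Brown surface; the end-point coordinates are approximate locations of two of its minima, and the full VRC optimization over basis-function coefficients is not attempted here.

      import numpy as np

      # Standard Mueller-Brown potential parameters.
      A  = np.array([-200.0, -100.0, -170.0, 15.0])
      a  = np.array([-1.0, -1.0, -6.5, 0.7])
      b  = np.array([0.0, 0.0, 11.0, 0.6])
      c  = np.array([-10.0, -10.0, -6.5, 0.7])
      x0 = np.array([1.0, 0.0, -0.5, -1.0])
      y0 = np.array([0.0, 0.5, 1.5, 1.0])

      def grad(p):
          """Analytic gradient of the Mueller-Brown potential at point p = (x, y)."""
          x, y = p
          dx, dy = x - x0, y - y0
          e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
          gx = np.sum(e * (2.0 * a * dx + b * dy))
          gy = np.sum(e * (b * dx + 2.0 * c * dy))
          return np.array([gx, gy])

      def vre(path):
          """Discretized line integral of |grad V| along a piecewise-linear path."""
          total = 0.0
          for p, q in zip(path[:-1], path[1:]):
              mid = 0.5 * (p + q)
              total += np.linalg.norm(grad(mid)) * np.linalg.norm(q - p)
          return total

      # Straight-line trial path between points near two minima (approximate coordinates).
      reactant, product = np.array([-0.558, 1.442]), np.array([0.623, 0.028])
      path = np.linspace(0.0, 1.0, 101)[:, None] * (product - reactant) + reactant
      print("VRE of straight-line trial path: %.1f" % vre(path))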

  2. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2015-12-28

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster. PMID:26723645

  3. Path optimization by a variational reaction coordinate method. I. Development of formalism and algorithms

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2015-12-01

    The development of algorithms to optimize reaction pathways between reactants and products is an active area of study. Existing algorithms typically describe the path as a discrete series of images (chain of states) which are moved downhill toward the path, using various reparameterization schemes, constraints, or fictitious forces to maintain a uniform description of the reaction path. The Variational Reaction Coordinate (VRC) method is a novel approach that finds the reaction path by minimizing the variational reaction energy (VRE) of Quapp and Bofill. The VRE is the line integral of the gradient norm along a path between reactants and products and minimization of VRE has been shown to yield the steepest descent reaction path. In the VRC method, we represent the reaction path by a linear expansion in a set of continuous basis functions and find the optimized path by minimizing the VRE with respect to the linear expansion coefficients. Improved convergence is obtained by applying constraints to the spacing of the basis functions and coupling the minimization of the VRE to the minimization of one or more points along the path that correspond to intermediates and transition states. The VRC method is demonstrated by optimizing the reaction path for the Müller-Brown surface and by finding a reaction path passing through 5 transition states and 4 intermediates for a 10 atom Lennard-Jones cluster.

  4. Secondary Path Modeling Method for Active Noise Control of Power Transformer

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Liang, Jiabi; Liang, Yuanbin; Wang, Lixin; Pei, Xiugao; Li, Peng

    The accuracy of the secondary path model is critical to the stability of an active noise control system. Given the input and output of the secondary path, system identification theory can be used to identify the path. Based on the experimental data, correlation analysis is adopted to eliminate the random noise and nonlinear harmonics in the output data in order to obtain an accurate frequency characteristic of the secondary path. After that, Levy's method is applied to identify the transfer function of the path. Computer simulation results are given for each step, showing that the proposed off-line modeling method is feasible and applicable. Finally, Levy's method is used to obtain an accurate secondary path model in an active control experiment on transformer noise, reducing the noise sound level by about 10 dB.

  5. Benchmarking Gas Path Diagnostic Methods: A Public Approach

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Bird, Jeff; Davison, Craig; Volponi, Al; Iverson, R. Eugene

    2008-01-01

    Recent technology reviews have identified the need for objective assessments of engine health management (EHM) technology. The need is two-fold: technology developers require relevant data and problems to design and validate new algorithms and techniques while engine system integrators and operators need practical tools to direct development and then evaluate the effectiveness of proposed solutions. This paper presents a publicly available gas path diagnostic benchmark problem that has been developed by the Propulsion and Power Systems Panel of The Technical Cooperation Program (TTCP) to help address these needs. The problem is coded in MATLAB (The MathWorks, Inc.) and coupled with a non-linear turbofan engine simulation to produce "snap-shot" measurements, with relevant noise levels, as if collected from a fleet of engines over their lifetime of use. Each engine within the fleet will experience unique operating and deterioration profiles, and may encounter randomly occurring relevant gas path faults including sensor, actuator and component faults. The challenge to the EHM community is to develop gas path diagnostic algorithms to reliably perform fault detection and isolation. An example solution to the benchmark problem is provided along with associated evaluation metrics. A plan is presented to disseminate this benchmark problem to the engine health management technical community and invite technology solutions.

  6. Path segmentation for beginners: an overview of current methods for detecting changes in animal movement patterns.

    PubMed

    Edelhoff, Hendrik; Signer, Johannes; Balkenhol, Niko

    2016-01-01

    Increased availability of high-resolution movement data has led to the development of numerous methods for studying changes in animal movement behavior. Path segmentation methods provide basics for detecting movement changes and the behavioral mechanisms driving them. However, available path segmentation methods differ vastly with respect to underlying statistical assumptions and output produced. Consequently, it is currently difficult for researchers new to path segmentation to gain an overview of the different methods, and choose one that is appropriate for their data and research questions. Here, we provide an overview of different methods for segmenting movement paths according to potential changes in underlying behavior. To structure our overview, we outline three broad types of research questions that are commonly addressed through path segmentation: 1) the quantitative description of movement patterns, 2) the detection of significant change-points, and 3) the identification of underlying processes or 'hidden states'. We discuss advantages and limitations of different approaches for addressing these research questions using path-level movement data, and present general guidelines for choosing methods based on data characteristics and questions. Our overview illustrates the large diversity of available path segmentation approaches, highlights the need for studies that compare the utility of different methods, and identifies opportunities for future developments in path-level data analysis. PMID:27595001

  7. An efficient surrogate-based method for computing rare failure probability

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Jinglai; Xiu, Dongbin

    2011-10-01

    In this paper, we present an efficient numerical method for evaluating rare failure probability. The method is based on a recently developed surrogate-based method from Li and Xiu [J. Li, D. Xiu, Evaluation of failure probability via surrogate models, J. Comput. Phys. 229 (2010) 8966-8980] for failure probability computation. The method by Li and Xiu is of hybrid nature, in the sense that samples of both the surrogate model and the true physical model are used, and its efficiency gain relies on using only very few samples of the true model. Here we extend the capability of the method to rare probability computation by using the idea of importance sampling (IS). In particular, we employ the cross-entropy (CE) method, which is an effective method to determine the biasing distribution in IS. We demonstrate that, by combining with the CE method, a surrogate-based IS algorithm can be constructed and is highly efficient for rare failure probability computation; it incurs much reduced simulation efforts compared to the traditional CE-IS method. In many cases, the new method is capable of capturing failure probability as small as 10^-12 to 10^-6 with only several hundred samples.

  8. Reliability analysis of redundant systems. [a method to compute transition probabilities

    NASA Technical Reports Server (NTRS)

    Yeh, H. Y.

    1974-01-01

    A method is proposed to compute the transition probability (the probability of partial or total failure) of a parallel redundant system. The effects of the geometry of the system, the direction of the load, and the degree of redundancy on the probability of complete survival of a parachute-like system are also studied. The results show that the probability of complete survival of a three-member parachute-like system is very sensitive to variation of the horizontal angle of the load. However, this sensitivity becomes insignificant as the degree of redundancy increases.

  9. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Herzog, James P. (Inventor); Bickford, Randall L. (Inventor)

    2005-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  10. Surveillance System and Method having an Adaptive Sequential Probability Fault Detection Test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2008-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  11. Surveillance system and method having an adaptive sequential probability fault detection test

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor); Herzog, James P. (Inventor)

    2006-01-01

    System and method providing surveillance of an asset such as a process and/or apparatus by providing training and surveillance procedures that numerically fit a probability density function to an observed residual error signal distribution that is correlative to normal asset operation and then utilizes the fitted probability density function in a dynamic statistical hypothesis test for providing improved asset surveillance.

  12. A combined method for determining reaction paths, minima, and transition state geometries

    NASA Astrophysics Data System (ADS)

    Ayala, Philippe Y.; Schlegel, H. Bernhard

    1997-07-01

    Mapping out a reaction mechanism involves optimizing the reactants and products, finding the transition state and following the reaction path connecting them. Transition states can be difficult to locate and reaction paths can be expensive to follow. We describe an efficient algorithm for determining the transition state, minima and reaction path in a single procedure. Starting with an approximate path represented by N points, the path is iteratively relaxed until one of the N points reaches the transition state, the end points optimize to minima, and the remaining points converge to a second-order approximation of the steepest descent path. The method appears to be more reliable than conventional transition state optimization algorithms, and requires only energies and gradients, but not second derivative calculations. The procedure is illustrated by application to a number of model reactions. In most cases, the reaction mechanism can be described well using 5 to 7 points to represent the transition state, the minima and the path. The computational cost of relaxing the path is less than or comparable to the cost of standard techniques for finding the transition state and the minima, determining the transition vector and following the reaction path on both sides of the transition state.

  13. A new method to calculate reaction paths for conformation transitions of large molecules

    NASA Astrophysics Data System (ADS)

    Smart, Oliver S.

    1994-05-01

    Path energy minimization (PEM), a novel method for the generation of a reaction path linking two known conformers of a molecule, is presented. The technique is based on optimizing a function which closely approximates the peak potential energy of a quasi-continuous path between the fixed end points. A transition involving the change in the pucker angle of α-D-xylulofuranose is used as a test case. The method is shown to be capable of identifying transition state structures and energy barriers. The utility of the method is demonstrated by an application to a substantial conformational transition of the ion-channel-forming polypeptide gramicidin A.

  14. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
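
    As an illustration of the MPP search step, the Python sketch below finds the most probable point for a simple limit state in standard normal space by minimizing the distance to the origin subject to g(u) = 0, then converts the reliability index to a failure probability with the first-order (FORM) approximation. The linear limit state is an illustrative choice made so that the FORM result is exact and easy to check; it is not an example from the paper.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def g(u):
          # Illustrative linear limit state in standard normal space; failure when g <= 0.
          return 3.0 - u[0] - u[1]

      # MPP search: minimize ||u||^2 subject to the limit-state constraint g(u) = 0.
      res = minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
                     constraints=[{"type": "eq", "fun": g}], method="SLSQP")
      u_mpp = res.x
      beta = np.linalg.norm(u_mpp)                 # reliability (safety) index
      pf_form = norm.cdf(-beta)                    # FORM approximation of the failure probability

      print("MPP:", np.round(u_mpp, 4))
      print("beta = %.4f   Pf(FORM) = %.3e" % (beta, pf_form))
      print("exact Pf for this linear g: %.3e" % norm.cdf(-3.0 / np.sqrt(2.0)))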

  15. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  16. An improved path flux analysis with multi generations method for mechanism reduction

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Gou, Xiaolong

    2016-03-01

    An improved path flux analysis with a multi generations (IMPFA) method is proposed to eliminate unimportant species and reactions, and to generate skeletal mechanisms. The production and consumption path fluxes of each species at multiple reaction paths are calculated and analysed to identify the importance of the species and of the elementary reactions. On the basis of the indexes of each reaction path of the first, second, and third generations, the improved path flux analysis with two generations (IMPFA2) and improved path flux analysis with three generations (IMPFA3) are used to generate skeletal mechanisms that contain different numbers of species. The skeletal mechanisms are validated in the case of homogeneous autoignition and perfectly stirred reactor of methane and n-decane/air mixtures. Simulation results of the skeletal mechanisms generated by IMPFA2 and IMPFA3 are compared with those obtained by path flux analysis (PFA) with two and three generations, respectively. The comparisons of ignition delay times, final temperatures, and temperature dependence on flow residence time show that the skeletal mechanisms generated by the present IMPFA method are more accurate than those obtained by the PFA method, with almost the same number of species under a range of initial conditions. By considering the accuracy and computational efficiency, when using the IMPFA (or PFA) method, three generations may be the best choice for the reduction of large-scale detailed chemistry.

  17. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to recast this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in planning paths. In the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method can solve the dead-point problem effectively.
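
    A minimal sketch of the traditional APF planning step that the paper improves upon: the vehicle descends the gradient of an attractive potential toward the goal plus repulsive potentials around obstacles. The gains, obstacle positions, and the simple 2-D point-mass kinematics are illustrative assumptions; the paper's optimal-control correction is not included.

      import numpy as np

      goal = np.array([10.0, 10.0])
      obstacles = [np.array([4.0, 4.5]), np.array([7.0, 8.0])]   # illustrative obstacle positions
      ka, kr, d0 = 1.0, 3.0, 2.0                                  # gains and repulsion cutoff distance

      def force(q):
          f = -ka * (q - goal)                                    # attractive force toward the goal
          for obs in obstacles:
              diff = q - obs
              d = np.linalg.norm(diff)
              if d < d0:                                          # repulsion only inside the cutoff
                  f += kr * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
          return f

      q = np.array([0.0, 0.0])
      path = [q.copy()]
      for _ in range(500):
          f = force(q)
          q = q + 0.05 * f / (np.linalg.norm(f) + 1e-9)           # constant-speed step along the force
          path.append(q.copy())
          if np.linalg.norm(q - goal) < 0.1:
              break
      print("steps: %d, final position: %s" % (len(path) - 1, np.round(q, 2)))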

  18. Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.

    PubMed

    Jaspersen, Johannes G; Montibeller, Gilberto

    2015-07-01

    Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. PMID:25850859

  19. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for the shaped Cassegrain antenna is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted by Non-Uniform Rational B-Splines (NURBS). Finally, the method of optical path difference calculation is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as the axial and lateral displacements of the feed and sub-reflector, or the tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The approach is general and can serve as a reference for the optical path difference calculation of other radio telescopes with shaped surfaces.

  20. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    A design flood is a hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the biggest river in Japan, it is 1 in 200 years, for the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The methods used to set design floods vary among countries. The probability method is also used in the Netherlands, but the base data are water levels or discharges and the probability is 1 in 1250 years (in the fresh-water section). On the other hand, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases lead to the questions: "why do the methods vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2; however, it was changed to the probability method after the war because of limitations of the historical maximum method under the specific socio-economic situation: (1) the budget limitation due to the war and the GHQ occupation, and (2) the historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) that attacked Japan and broke the records of historical maximum discharge in the main rivers; these flood disasters made the flood prevention projects difficult to complete. Then, Japanese hydrologists imported hydrological probability statistics from the West to take account of ...

  1. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk with different initial positions.
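
    The details of the card-based walk are not reproduced in the abstract, so the Python sketch below illustrates the recurrence-relation technique on a standard symmetric walk on a path with absorbing ends: the probability h_i of reaching the right end before the left end satisfies h_i = (h_{i-1} + h_{i+1})/2, and the recurrence solution is checked against simulation. The specific walk and parameters are illustrative stand-ins.

      import numpy as np

      N = 10            # absorbing barriers at positions 0 and N
      p = 0.5           # probability of a +1 step

      # Solve the recurrence h_i = p*h_{i+1} + (1-p)*h_{i-1}, with h_0 = 0 and h_N = 1,
      # written as a tridiagonal linear system for the interior positions 1..N-1.
      A = np.zeros((N - 1, N - 1))
      rhs = np.zeros(N - 1)
      for i in range(1, N):
          A[i - 1, i - 1] = 1.0
          if i - 1 >= 1:
              A[i - 1, i - 2] = -(1 - p)
          if i + 1 <= N - 1:
              A[i - 1, i] = -p
          if i + 1 == N:
              rhs[i - 1] = p
      h = np.linalg.solve(A, rhs)

      # Monte Carlo check from one starting position.
      rng = np.random.default_rng(3)
      start, trials, wins = 4, 20000, 0
      for _ in range(trials):
          pos = start
          while 0 < pos < N:
              pos += 1 if rng.random() < p else -1
          wins += pos == N
      print("recurrence: h_%d = %.4f (exact i/N = %.4f)" % (start, h[start - 1], start / N))
      print("simulation: %.4f" % (wins / trials))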

  2. Computer code long path method for long path differential-absorption predictions using CO{sub 2} laser lines

    SciTech Connect

    Zuev, V.V.; Mitsel`, A.A.; Kataev, M.Y.; Ptashnik, I.V.; Firsov, K.M.

    1995-11-01

    A computer program LPM (Long Path Method) has been developed for imitative modeling of the concentration of gases (H₂O, CO₂, O₃, NH₃, C₂H₄) in the atmosphere using a long-path double-wavelength laser system equipped with two tunable CO₂ lasers. The model is designed for four different lasing isotopes of CO₂ (¹²C¹⁶O₂, ¹³C¹⁶O₂, ¹²C¹⁸O₂, ¹³C¹⁸O₂). The program determines optimal pairs of CO₂ laser wavelengths, and the gas concentration retrieval errors from sounding data caused both by detector noise and systematic inaccuracy. The program was written in the MS FORTRAN and Visual Basic languages for Windows 3.1 and an IBM-compatible PC. © 1995 American Institute of Physics.

  3. Path method for reconstructing images in fluorescence optical tomography

    SciTech Connect

    Kravtsenyuk, Olga V; Lyubimov, Vladimir V; Kalintseva, Natalie A

    2006-11-30

    A reconstruction method elaborated for the optical diffusion tomography of the internal structure of objects containing absorbing and scattering inhomogeneities is considered. The method is developed for studying objects with fluorescing inhomogeneities and can be used for imaging of distributions of artificial fluorophores whose aggregations indicate the presence of various diseases or pathological deviations. (special issue devoted to multiple radiation scattering in random media)

  4. A novel approach for multiple mobile objects path planning: Parametrization method and conflict resolution strategy

    NASA Astrophysics Data System (ADS)

    Ma, Yong; Wang, Hongwei; Zamirian, M.

    2012-01-01

    We present a new two-step approach to determine conflict-free paths for mobile objects in two and three dimensions with moving obstacles. Firstly, the shortest path of each object is set as the goal function, subject to a collision-avoidance criterion, path smoothness, and velocity and acceleration constraints. This problem is formulated as a calculus of variations problem (CVP). Using a parametrization method, the CVP is converted to a time-varying nonlinear programming problem (TNLPP) and then solved. Secondly, the move sequence of the objects is assigned by a priority scheme; conflicts are resolved by a multilevel conflict resolution strategy. The efficiency of the approach is confirmed by numerical examples.

  5. Approximation of probability density functions by the Multilevel Monte Carlo Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Bierig, Claudio; Chernov, Alexey

    2016-06-01

    We develop a complete convergence theory for the Maximum Entropy method based on moment matching for a sequence of approximate statistical moments estimated by the Multilevel Monte Carlo method. Under appropriate regularity assumptions on the target probability density function, the proposed method is superior to the Maximum Entropy method with moments estimated by the Monte Carlo method. New theoretical results are illustrated in numerical examples.
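
    A minimal Python sketch of the moment-matched Maximum Entropy reconstruction step that both variants share: given a few moments (here computed exactly rather than estimated by Multilevel Monte Carlo), the Lagrange multipliers of the exponential-family density are found by minimizing the convex dual objective. The Beta(2, 5) target, the support [0, 1], and the number of moments are illustrative choices.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import beta

      # Quadrature grid on the assumed support [0, 1].
      x = np.linspace(0.0, 1.0, 2001)
      dx = x[1] - x[0]

      def integ(f):
          # Trapezoidal rule on the uniform grid.
          return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

      # Target: first four raw moments of an illustrative Beta(2, 5) density.
      target = beta(2, 5)
      mu = np.array([target.moment(k) for k in range(1, 5)])
      powers = np.vstack([x**k for k in range(1, 5)])

      def dual_and_grad(lam):
          """Convex dual of the MaxEnt problem, log Z(lam) - lam.mu, with its gradient."""
          expo = lam @ powers
          m = expo.max()                               # shift for numerical stability
          w = np.exp(expo - m)
          Z = integ(w)
          grad = np.array([integ(powers[k] * w) for k in range(4)]) / Z - mu
          return m + np.log(Z) - lam @ mu, grad

      res = minimize(dual_and_grad, x0=np.zeros(4), jac=True, method="BFGS")
      lam = res.x
      w = np.exp(lam @ powers - (lam @ powers).max())
      p = w / integ(w)                                 # normalized MaxEnt density
      recovered = np.array([integ(powers[k] * p) for k in range(4)])
      print("target moments:   ", np.round(mu, 5))
      print("recovered moments:", np.round(recovered, 5))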

  6. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the
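
    The Python sketch below illustrates the path-integral apportionment on a toy two-source problem: the increment in concentration between a background and a full-emissions case is written as a line integral of first-order sensitivities along a straight emissions path and evaluated with a three-point Gauss-Legendre formula. The concentration function and the finite-difference sensitivities are illustrative stand-ins for the photochemical model and the decoupled direct method.

      import numpy as np

      def concentration(E):
          # Toy nonlinear dependence of a pollutant concentration on two emission rates.
          E1, E2 = E
          return 4.0 * E1 * E2 / (1.0 + E1 + E2) + 0.3 * E1**2

      def sensitivity(E, eps=1e-6):
          # First-order sensitivities dC/dE_i by central differences (stand-in for DDM).
          s = np.zeros(2)
          for i in range(2):
              dE = np.zeros(2)
              dE[i] = eps
              s[i] = (concentration(E + dE) - concentration(E - dE)) / (2.0 * eps)
          return s

      E_bg = np.array([0.1, 0.2])        # background-only emissions (illustrative)
      E_full = np.array([1.0, 1.5])      # all-emissions case (illustrative)
      dE = E_full - E_bg

      # Straight path E(t) = E_bg + t*dE, t in [0, 1]; the contribution of source i is
      # the integral over t of dC/dE_i(E(t)) * dE_i, here by 3-point Gauss-Legendre.
      nodes, weights = np.polynomial.legendre.leggauss(3)
      t = 0.5 * (nodes + 1.0)            # map nodes from [-1, 1] to [0, 1]
      w = 0.5 * weights
      contrib = np.zeros(2)
      for tj, wj in zip(t, w):
          contrib += wj * sensitivity(E_bg + tj * dE) * dE

      increment = concentration(E_full) - concentration(E_bg)
      print("source contributions:", np.round(contrib, 4))
      print("sum of contributions: %.4f   total increment: %.4f" % (contrib.sum(), increment))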

  7. A deformable model-based minimal path segmentation method for kidney MR images

    NASA Astrophysics Data System (ADS)

    Li, Ke; Fei, Baowei

    2008-03-01

    We developed a new minimal path segmentation method for mouse kidney MR images. We used dynamic programming and a minimal path segmentation approach to detect the optimal path within a weighted graph between two end points. The energy function combines distance and gradient information to guide the marching curve and thus to evaluate the best path and to span a broken edge. An algorithm was developed to automatically place initial end points. Dynamic programming was used to automatically optimize and update end points during the searching procedure. Principal component analysis (PCA) was used to generate a deformable model, which serves as the prior knowledge for the selection of initial end points and for the evaluation of the best path. The method has been tested for kidney MR images acquired from 44 mice. To quantitatively assess the automatic segmentation method, we compared the results with manual segmentation. The mean and standard deviation of the overlap ratios are 95.19% ± 0.03%. The distance error between the automatic and manual segmentation is 0.82 ± 0.41 pixels. The automatic minimal path segmentation method is fast, accurate, and robust, and it can be applied not only to kidney images but also to other organs.
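
    A heavily simplified sketch of the minimal-path idea: Dijkstra's algorithm finds the cheapest path between two end points in a pixel graph whose costs are derived from the image. The synthetic ring image and the simple per-pixel cost are illustrative; the paper's combined distance-gradient energy, automatic end-point placement, and PCA shape prior are not reproduced here.

      import heapq
      import numpy as np

      def minimal_path(cost, start, end):
          """Dijkstra's shortest path on a 4-connected grid of per-pixel costs."""
          h, w = cost.shape
          dist = np.full((h, w), np.inf)
          prev = {}
          dist[start] = 0.0
          heap = [(0.0, start)]
          while heap:
              d, (r, c) = heapq.heappop(heap)
              if (r, c) == end:
                  break
              if d > dist[r, c]:
                  continue
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w and d + cost[rr, cc] < dist[rr, cc]:
                      dist[rr, cc] = d + cost[rr, cc]
                      prev[(rr, cc)] = (r, c)
                      heapq.heappush(heap, (dist[rr, cc], (rr, cc)))
          path, node = [end], end
          while node != start:
              node = prev[node]
              path.append(node)
          return path[::-1]

      # Synthetic "image": a bright ring on a dark background; travel is cheap along the ring.
      yy, xx = np.mgrid[0:64, 0:64]
      ring = np.exp(-((np.hypot(xx - 32, yy - 32) - 20.0) ** 2) / 8.0)
      cost = 1.0 / (ring + 0.05)

      path = minimal_path(cost, start=(32, 12), end=(12, 32))
      print("path length (pixels):", len(path))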

  8. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems.

    PubMed

    Branduardi, Davide; Faraldo-Gómez, José D

    2013-09-10

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method, we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D-mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanisms of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly dimensional string. PMID
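    The on-the-fly removal of rigid-body rotations and translations can be sketched with the standard Kabsch superposition algorithm. The snippet below is a minimal illustration under that assumption, not the SOMA implementation; the coordinates and names are hypothetical.

```python
import numpy as np

def kabsch_align(mobile, reference):
    """Remove rigid-body translation and rotation by optimally superposing
    `mobile` onto `reference` (both N x 3 arrays of Cartesian coordinates)."""
    mob_c = mobile - mobile.mean(axis=0)
    ref_c = reference - reference.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix (Kabsch algorithm).
    U, S, Vt = np.linalg.svd(mob_c.T @ ref_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return (R @ mob_c.T).T + reference.mean(axis=0)

# Example: a randomly rotated and translated copy aligns back onto the original.
rng = np.random.default_rng(0)
coords = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = coords @ Rz.T + np.array([1.0, -2.0, 0.5])
aligned = kabsch_align(moved, coords)
print("RMSD after alignment:", np.sqrt(((aligned - coords) ** 2).sum(axis=1).mean()))
```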

  9. Application of Simultaneous Equations Method to ANC System with Non-minimum Phase Secondary Path

    NASA Astrophysics Data System (ADS)

    Fujii, Kensaku; Kashihara, Kenji; Wakabayashi, Isao; Muneyasu, Mitsuji; Morimoto, Masakazu

    In this paper, we propose a method capable of shortening the distance from a noise detection microphone to a loudspeaker in an active noise control system with a non-minimum-phase secondary path. The distance can basically be shortened by forming the noise control filter, which produces the secondary noise provided by the loudspeaker, as the cascade connection of a non-recursive filter and a recursive filter. The output of the recursive filter, however, diverges even when the secondary path includes only a minimum-phase component. In this paper, we prevent the divergence by utilizing the MINT (multi-input/output inverse theorem) method, which increases the number of secondary paths beyond the number of primary paths. The MINT method, however, requires a large-scale inverse matrix operation, which increases the processing cost. We hence propose a method that reduces the processing cost: the MINT method need only be applied to the non-minimum-phase components of the secondary paths. We therefore extract the non-minimum-phase components and apply the MINT method only to those. The order of the inverse matrix thereby decreases, and the processing cost can be reduced. We finally show a simulation result demonstrating that the proposed method works successfully.

  10. Selective flow path alpha particle detector and method of use

    DOEpatents

    Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore

    2002-01-01

    A method and apparatus for monitoring alpha contamination are provided in which ions generated in the air surrounding the item, by the passage of alpha particles, are moved to a distant detector location. The parts of the item from which ions are withdrawn can be controlled by restricting the air flow over different portions of the apparatus. In this way, detection of internal and external surfaces separately, for instance, can be provided. The apparatus and method are particularly suited for use in undertaking alpha contamination measurements during the commissioning operations.

  11. Free vibration characteristics of multiple load path blades by the transfer matrix method

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.; Joshi, Arun M.

    1986-01-01

    The determination of free vibrational characteristics is basic to any dynamic design, and these characteristics can form the basis for aeroelastic stability analyses. Conventional helicopter blades are typically idealized as single-load-path blades, and the transfer matrix method is well suited to analyze such blades. Several current helicopter dynamic programs employ transfer matrices to analyze the rotor blades. In this paper, however, the transfer matrix method is extended to treat multiple-load-path blades, without resorting to an equivalent single-load-path approximation. With such an extension, these current rotor dynamic programs which employ the transfer matrix method can be modified with relative ease to account for the multiple load paths. Unlike the conventional blades, the multiple-load-path blades require the introduction of the axial degree-of-freedom into the solution process to account for the differential axial displacements of the different load paths. The transfer matrix formulation is validated through comparison with the finite-element solutions.
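    The transfer matrix idea, chaining segment matrices and locating natural frequencies from a boundary condition, can be sketched for the simplest single-load-path case. The code below is a toy for axial free vibration of a stepped rod with hypothetical properties, not the multiple-load-path formulation with the extra axial degree of freedom described above; frequencies are bracketed from sign changes of the boundary residual.

```python
import numpy as np

def segment_matrix(omega, L, E, A, rho):
    """Transfer matrix relating the state (u, N) at the two ends of a uniform
    rod segment in axial free vibration at circular frequency omega."""
    beta = omega * np.sqrt(rho / E)
    c, s = np.cos(beta * L), np.sin(beta * L)
    return np.array([[c, s / (E * A * beta)],
                     [-E * A * beta * s, c]])

def boundary_residual(omega, segments):
    """Chain the segment matrices root-to-tip and return the quantity that must
    vanish for a fixed-free rod: the tip force for a unit root force with u(0)=0."""
    T = np.eye(2)
    for L, E, A, rho in segments:
        T = segment_matrix(omega, L, E, A, rho) @ T
    return T[1, 1]

# Two-segment stepped rod (hypothetical nondimensional properties: L, E, A, rho).
segments = [(0.5, 1.0, 1.0, 1.0), (0.5, 1.0, 0.5, 1.0)]
omegas = np.linspace(0.1, 10.0, 2000)
res = np.array([boundary_residual(w, segments) for w in omegas])
roots = omegas[:-1][np.sign(res[:-1]) != np.sign(res[1:])]
print("approximate natural frequencies:", roots[:3])
```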

  12. Constrained path Monte Carlo method for fermion ground states

    SciTech Connect

    Zhang, S. |; Carlson, J.; Gubernatis, J.E.

    1997-03-01

    We describe and discuss a recently proposed quantum Monte Carlo algorithm to compute the ground-state properties of various systems of interacting fermions. In this method, the ground state is projected from an initial wave function by a branching random walk in an overcomplete basis of Slater determinants. By constraining the determinants according to a trial wave function |ψT>, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ψT> is exact. We illustrate the method by describing in detail its implementation for the two-dimensional one-band Hubbard model. We show results for lattice sizes up to 16×16 and for various electron fillings and interaction strengths. With simple single-determinant wave functions as |ψT>, the method yields accurate (often to within a few percent) estimates of the ground-state energy as well as correlation functions, such as those for electron pairing. We conclude by discussing possible extensions of the algorithm. © 1997 The American Physical Society

  13. Constrained path Monte Carlo method for fermion ground states

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwei; Carlson, J.; Gubernatis, J. E.

    1997-03-01

    We describe and discuss a recently proposed quantum Monte Carlo algorithm to compute the ground-state properties of various systems of interacting fermions. In this method, the ground state is projected from an initial wave function by a branching random walk in an overcomplete basis of Slater determinants. By constraining the determinants according to a trial wave function |ψT>, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ψT> is exact. We illustrate the method by describing in detail its implementation for the two-dimensional one-band Hubbard model. We show results for lattice sizes up to 16×16 and for various electron fillings and interaction strengths. With simple single-determinant wave functions as |ψT>, the method yields accurate (often to within a few percent) estimates of the ground-state energy as well as correlation functions, such as those for electron pairing. We conclude by discussing possible extensions of the algorithm.

  14. Probability computations using the SIGMA-PI method on a personal computer

    SciTech Connect

    Haskin, F.E.; Lazo, M.S.; Heger, A.S.

    1990-09-30

    The SIGMA-PI (ΣΠ) method, as implemented in the SIGPI computer code, is designed to accurately and efficiently evaluate the probability of Boolean expressions in disjunctive normal form given the base event probabilities. The method is not limited to problems in which base event probabilities are small, nor to Boolean expressions that exclude the complements of base events, nor to problems in which base events are independent. The feasibility of implementing the ΣΠ method on a personal computer has been evaluated, and a version of the SIGPI code capable of quantifying simple Boolean expressions with independent base events on the personal computer has been developed. Tasks required for a fully functional personal computer version of SIGPI have been identified together with enhancements that could be implemented to improve the utility and efficiency of the code.
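    A minimal sketch of evaluating the probability of a disjunctive-normal-form expression over independent base events (including complements) is shown below. It uses brute-force inclusion-exclusion over the terms and is not the SIGPI algorithm itself; event names and probabilities are hypothetical.

```python
from itertools import combinations

def dnf_probability(terms, p):
    """Probability that a DNF Boolean expression is true, assuming independent
    basic events. Each term maps an event name to True (the event) or False
    (its complement); `p` gives the individual event probabilities."""
    def conjunction_prob(group):
        # Probability that all terms in `group` hold simultaneously: merge the
        # literals; a contradiction (x and not-x) makes the conjunction impossible.
        required = {}
        for term in group:
            for ev, val in term.items():
                if required.setdefault(ev, val) != val:
                    return 0.0
        prob = 1.0
        for ev, val in required.items():
            prob *= p[ev] if val else 1.0 - p[ev]
        return prob

    # Inclusion-exclusion over all non-empty subsets of terms.
    total = 0.0
    for k in range(1, len(terms) + 1):
        for group in combinations(terms, k):
            total += (-1) ** (k + 1) * conjunction_prob(group)
    return total

# (A and B) or (not A and C): the terms are disjoint, so the exact answer is
# pA*pB + (1 - pA)*pC = 0.29.
p = {"A": 0.3, "B": 0.5, "C": 0.2}
terms = [{"A": True, "B": True}, {"A": False, "C": True}]
print(dnf_probability(terms, p))
```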

  15. Constrained Path Quantum Monte Carlo Method for Fermion Ground States

    NASA Astrophysics Data System (ADS)

    Zhang, Shiwei; Carlson, J.; Gubernatis, J. E.

    1995-05-01

    We propose a new quantum Monte Carlo algorithm to compute fermion ground-state properties. The ground state is projected from an initial wave function by a branching random walk in an over-complete basis space of Slater determinants. By constraining the determinants according to a trial wave function |ΨT>, we remove the exponential decay of signal-to-noise ratio characteristic of the sign problem. The method is variational and is exact if |ΨT> is exact. We report results on the two-dimensional Hubbard model up to size 16×16, for various electron fillings and interaction strengths.

  16. Path Planning Method for UUV Homing and Docking in Movement Disorders Environment

    PubMed Central

    Yan, Zheping; Deng, Chao; Chi, Dongnan; Hou, Shuping

    2014-01-01

    A path planning method for unmanned underwater vehicle (UUV) homing and docking in an environment with moving obstacles is proposed in this paper. First, a cost function is proposed for path planning. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with the minimum value of the cost function. Next, a strategy is proposed for the UUV to enter the mother vessel at a fixed angle. Finally, test functions are introduced to analyze the performance of NPSO and compare it with basic particle swarm optimization (BPSO), inertia-weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficients (TVAC). For unimodal functions, NPSO showed better search accuracy and stability than the other algorithms, and, for multimodal functions, the performance of NPSO is similar to that of TVAC. A simulation of UUV path planning is then presented, which shows that, with the proposed strategy, the UUV can dodge obstacles and threats and search for an efficient path. PMID:25054169
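    The particle-swarm building block used above can be illustrated with a basic PSO minimizing a simple test function. The sketch below is generic BPSO, not the authors' NPSO or their UUV cost function; the bounds, coefficients, and test function are assumptions.

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization minimizing `cost` over [-5, 5]^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Velocity update: inertia + cognitive pull + social pull.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Unimodal test function (sphere); the minimum is at the origin.
best, best_val = pso(lambda p: np.sum(p ** 2), dim=3)
print(best, best_val)
```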

  17. Method and apparatus for monitoring characteristics of a flow path having solid components flowing therethrough

    DOEpatents

    Hoskinson, Reed L.; Svoboda, John M.; Bauer, William F.; Elias, Gracy

    2008-05-06

    A method and apparatus are provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating, or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention, the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward the flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.

  18. Path planning method for UUV homing and docking in movement disorders environment.

    PubMed

    Yan, Zheping; Deng, Chao; Chi, Dongnan; Chen, Tao; Hou, Shuping

    2014-01-01

    A path planning method for unmanned underwater vehicle (UUV) homing and docking in an environment with moving obstacles is proposed in this paper. First, a cost function is proposed for path planning. Then, a novel particle swarm optimization (NPSO) is proposed and applied to find the waypoint with the minimum value of the cost function. Next, a strategy is proposed for the UUV to enter the mother vessel at a fixed angle. Finally, test functions are introduced to analyze the performance of NPSO and compare it with basic particle swarm optimization (BPSO), inertia-weight particle swarm optimization (LWPSO, EPSO), and time-varying acceleration coefficients (TVAC). For unimodal functions, NPSO showed better search accuracy and stability than the other algorithms, and, for multimodal functions, the performance of NPSO is similar to that of TVAC. A simulation of UUV path planning is then presented, which shows that, with the proposed strategy, the UUV can dodge obstacles and threats and search for an efficient path. PMID:25054169

  19. A method of classification for multisource data in remote sensing based on interval-valued probabilities

    NASA Technical Reports Server (NTRS)

    Kim, Hakil; Swain, Philip H.

    1990-01-01

    An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent strategies for making decisions. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the dimensionally large data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.

  20. A simple method for afterpulse probability measurement in high-speed single-photon detectors

    NASA Astrophysics Data System (ADS)

    Liu, Junliang; Li, Yongfu; Ding, Lei; Zhang, Chunfang; Fang, Jiaxiong

    2016-07-01

    A simple statistical method is proposed for afterpulse probability measurement in high-speed single-photon detectors. The method is based on in-laser-period counting without the support of time-correlated information or delay adjustment, and is readily implemented with commercially available logic devices. We present comparisons among the proposed method and commonly used methods which use the time-correlated single-photon counter or the gated counter, based on a 1.25-GHz gated infrared single-photon detector. Results show that this in-laser-period counting method has similar accuracy to the commonly used methods with extra simplicity, robustness, and faster measuring speed.

  1. A fast tomographic method for searching the minimum free energy path

    SciTech Connect

    Chen, Changjun; Huang, Yanzhao; Xiao, Yi; Jiang, Xuewei

    2014-10-21

    The Minimum Free Energy Path (MFEP) provides a great deal of important information about chemical reactions, such as the free energy barrier, the location of the transition state, and the relative stability of reactant and product. With the MFEP, one can study the mechanisms of the reaction in an efficient way. Due to the large number of degrees of freedom, searching for the MFEP is a very time-consuming process. Here, we present a fast tomographic method to perform the search. Our approach first calculates the free energy surfaces in a sequence of hyperplanes perpendicular to a transition path. Based on an objective function and the free energy gradient, the transition path is optimized iteratively in the collective variable space. Applications of the present method to model systems show that it is practical and can be an alternative approach for finding the state-to-state MFEP.

  2. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van't

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  3. The "Closed School-Cluster" Method of Selecting a Probability Sample.

    ERIC Educational Resources Information Center

    Shaycoft, Marion F.

    In some educational research studies--particularly longitudinal studies requiring a probability sample of schools and spanning a wide range of grades--it is desirable to so select the sample that schools at different levels (e.g., elementary and secondary) "correspond." This has often proved unachievable, using standard methods of selecting school…

  4. MODIFIED AGAR MEDIUM FOR DETECTING ENVIRONMENTAL SALMONELLAE BY THE MOST-PROBABLE-NUMBER METHOD

    EPA Science Inventory

    Salmonellae in the environment remain a potential source of disease. Low numbers of salmonellae have been detected and enumerated from environmental samples by most probable number methods that require careful colony selection from plated agar medium. A modified xylose lysine bri...

  5. Measures of activity-based pedestrian exposure to the risk of vehicle-pedestrian collisions: space-time path vs. potential path tree methods.

    PubMed

    Yao, Shenjun; Loo, Becky P Y; Lam, Winnie W Y

    2015-02-01

    Research on the extent to which pedestrians are exposed to road collision risk is important to the improvement of pedestrian safety. As precise geographical information is often difficult and costly to collect, this study proposes a potential path tree method derived from time geography concepts for measuring pedestrian exposure. With negative binomial regression (NBR) and geographically weighted Poisson regression (GWPR) models, the proposed probabilistic two-anchor-point potential path tree (PPT) approach (including the equal and weighted PPT methods) is compared with the deterministic space-time path (STP) method. The results indicate that both the STP and PPT methods are useful tools for measuring pedestrian exposure. While the STP method can save much time, the PPT methods outperform the STP method in explaining the underlying vehicle-pedestrian collision pattern. Further research efforts are needed to investigate the influence of walking speed and route choice. PMID:25555021

  6. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media. PMID:25607163

  7. Partly melted DNA conformations obtained with a probability peak finding method

    NASA Astrophysics Data System (ADS)

    Tøstesen, Eivind

    2005-06-01

    Peaks in the probabilities of loops or bubbles, helical segments, and unzipping ends in melting DNA are found in this article using a peak finding method that maps the hierarchical structure of certain energy landscapes. The peaks indicate the alternative conformations that coexist in equilibrium and the range of their fluctuations. This yields a representation of the conformational ensemble at a given temperature, which is illustrated in a single diagram called a stitch profile. This article describes the methodology and discusses stitch profiles vs the ordinary probability profiles using the phage lambda genome as an example.

  8. Code System to Calculate Group-Averaged Cross Sections Using the Collision Probability Method.

    Energy Science and Technology Software Center (ESTSC)

    1995-05-17

    Version 00 This program calculates group-averaged cross sections for specific zones in a one-dimensional geometry. ROLAIDS-CPM is an extension of ROLAIDS from the PSR-315/AMPX-77 package. The main extension is the capability to use the collision probability method for a slab- or cylinder-geometry rather than the interface-currents method. This new version allows slowing down of neutrons in the energy range where the scattering is elastic and upscattering does not occur. The scattering sources are assumed to be flat and isotropic in the different zones. The extra assumption of cosine currents at the interfaces of the zones (interface currents method) is not necessary for the collision probability method.

  9. Code System to Calculate Group-Averaged Cross Sections Using the Collision Probability Method.

    SciTech Connect

    KRUIJF, W. D.

    1995-05-17

    Version 00 This program calculates group-averaged cross sections for specific zones in a one-dimensional geometry. ROLAIDS-CPM is an extension of ROLAIDS from the PSR-315/AMPX-77 package. The main extension is the capability to use the collision probability method for a slab- or cylinder-geometry rather than the interface-currents method. This new version allows slowing down of neutrons in the energy range where the scattering is elastic and upscattering does not occur. The scattering sources are assumed to be flat and isotropic in the different zones. The extra assumption of cosine currents at the interfaces of the zones (interface currents method) is not necessary for the collision probability method.

  10. a Probability-Based Statistical Method to Extract Water Body of TM Images with Missing Information

    NASA Astrophysics Data System (ADS)

    Lian, Shizhong; Chen, Jiangping; Luo, Minghai

    2016-06-01

    Water information cannot be accurately extracted from TM images in which true information is lost because of blocking clouds and missing data stripes. Water is continuously distributed in natural conditions; thus, this paper proposes a new method of water body extraction based on probability statistics to improve the accuracy of water information extraction from TM images with missing information. Different disturbances from clouds and missing data stripes are simulated. Water information is extracted using global histogram matching, local histogram matching, and the probability-based statistical method on the simulated images. Experiments show that smaller Areal Error and higher Boundary Recall can be obtained using this method compared with the conventional methods.

  11. Lipid extraction methods from microalgal biomass harvested by two different paths: screening studies toward biodiesel production.

    PubMed

    Ríos, Sergio D; Castañeda, Joandiet; Torras, Carles; Farriol, Xavier; Salvadó, Joan

    2013-04-01

    Microalgae can grow rapidly and capture CO2 from the atmosphere, converting it into complex organic molecules such as lipids (biodiesel feedstock). Economically feasible, large-scale microalgae-based oil production depends on optimizing the entire production process. This process can be divided into three very different but directly related steps (production, concentration, and lipid extraction and transesterification). The aim of this study is to identify the best method of lipid extraction to assess the potential of microalgal biomass obtained from two different harvesting paths. The first path used all-physical concentration steps, and the second path was a combination of chemical and physical concentration steps. Three microalgae species were tested: Phaeodactylum tricornutum, Nannochloropsis gaditana, and Chaetoceros calcitrans. One-step lipid extraction-transesterification reached the same fatty acid methyl ester yield as the Bligh and Dyer and Soxhlet extraction with n-hexane methods, with the corresponding time, cost, and solvent savings. PMID:23434816

  12. A parallel multiple path tracing method based on OptiX for infrared image generation

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu

    2015-12-01

    Infrared image generation technology is being widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX was proposed to solve the problem, which can not only increase computational efficiency compared to serial ray tracing using CPU, but also produce relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed and thus a multiple path tracing method based on OptiX was developed. Furthermore, the Monte Carlo integration was employed to solve the radiation transfer equation, in which the importance sampling method was applied to accelerate the integral convergent rate. After that, the framework of the simulation platform and its sensor effects simulation diagram were given. Finally, the results showed that the method could generate relatively accurate radiation images if a precise importance sampling method was available.
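    The role of importance sampling in accelerating the Monte Carlo solution of a radiation integral can be illustrated in one dimension. The snippet below is not the OptiX renderer; it assumes a sharply peaked toy integrand and shows how drawing samples from a density proportional to the integrand collapses the estimator variance relative to uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000

# Integrand peaked near x = 0, integrated over [0, 1]; exact value 1 - e^{-10}.
f = lambda x: 10.0 * np.exp(-10.0 * x)

# Plain Monte Carlo with uniform samples.
x_uniform = rng.uniform(0.0, 1.0, N)
plain = f(x_uniform).mean()

# Importance sampling: draw from q(x) proportional to e^{-10x} on [0, 1] via the
# inverse CDF and weight each sample by f(x) / q(x). Because q matches the shape
# of f here, the weights are constant and the variance collapses.
u = rng.uniform(0.0, 1.0, N)
x_is = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0
q = 10.0 * np.exp(-10.0 * x_is) / (1.0 - np.exp(-10.0))
importance = (f(x_is) / q).mean()

print("exact:", 1.0 - np.exp(-10.0), "plain MC:", plain, "importance sampled:", importance)
```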

  13. Probability method for Cerenkov luminescence tomography based on conformance error minimization

    PubMed Central

    Ding, Xintao; Wang, Kun; Jie, Biao; Luo, Yonglong; Hu, Zhenhua; Tian, Jie

    2014-01-01

    Cerenkov luminescence tomography (CLT) was developed to reconstruct a three-dimensional (3D) distribution of radioactive probes inside a living animal. Reconstruction methods are generally performed within a unique framework by searching for the optimum solution. However, the ill-posed aspect of the inverse problem usually results in the reconstruction being non-robust. In addition, the reconstructed result may not match reality, since the difference between the highest and lowest uptakes of the resulting radiotracers may be considerably large; the biological significance is therefore lost. In this paper, based on the minimization of a conformance error, a probability method is proposed that consists of qualitative and quantitative modules. The proposed method first pinpoints the organ that contains the light source. Next, we developed a 0-1 linear optimization subject to a space constraint to model the CLT inverse problem, which was transformed into a forward problem by employing a region growing method to solve the optimization. After running through all of the elements used to grow the sources, a source sequence was obtained. Finally, the probability of each discrete node being the light source inside the organ was reconstructed. One numerical study and two in vivo experiments were conducted to verify the performance of the proposed algorithm, and comparisons were carried out using the hp-finite element method (hp-FEM). The results suggested that our proposed probability method was more robust and reasonable than hp-FEM. PMID:25071951

  14. Probability method for Cerenkov luminescence tomography based on conformance error minimization.

    PubMed

    Ding, Xintao; Wang, Kun; Jie, Biao; Luo, Yonglong; Hu, Zhenhua; Tian, Jie

    2014-07-01

    Cerenkov luminescence tomography (CLT) was developed to reconstruct a three-dimensional (3D) distribution of radioactive probes inside a living animal. Reconstruction methods are generally performed within a unique framework by searching for the optimum solution. However, the ill-posed aspect of the inverse problem usually results in the reconstruction being non-robust. In addition, the reconstructed result may not match reality, since the difference between the highest and lowest uptakes of the resulting radiotracers may be considerably large; the biological significance is therefore lost. In this paper, based on the minimization of a conformance error, a probability method is proposed that consists of qualitative and quantitative modules. The proposed method first pinpoints the organ that contains the light source. Next, we developed a 0-1 linear optimization subject to a space constraint to model the CLT inverse problem, which was transformed into a forward problem by employing a region growing method to solve the optimization. After running through all of the elements used to grow the sources, a source sequence was obtained. Finally, the probability of each discrete node being the light source inside the organ was reconstructed. One numerical study and two in vivo experiments were conducted to verify the performance of the proposed algorithm, and comparisons were carried out using the hp-finite element method (hp-FEM). The results suggested that our proposed probability method was more robust and reasonable than hp-FEM. PMID:25071951

  15. Real-time optical path control method that utilizes multiple support vector machines for traffic prediction

    NASA Astrophysics Data System (ADS)

    Kawase, Hiroshi; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

    An effective solution to the continuous expansion of Internet traffic is to offload traffic to lower layers such as the L2 or L1 optical layers. One possible approach is to introduce dynamic optical path operations such as adaptive establishment/tear-down according to traffic variation. Path operations cannot be done instantaneously; hence, traffic prediction is essential. Conventional prediction techniques need optimal parameter values to be determined in advance by averaging long-term variations from the past. However, this does not allow adaptation to the ever-changing short-term variations expected to be common in future networks. In this paper, we propose a real-time optical path control method based on a machine-learning technique involving support vector machines (SVMs). An SVM learns the most recent traffic characteristics and so enables better adaptation to temporal traffic variations than conventional techniques. The difficulty lies in determining how to minimize the time gap between optical path operation and buffer management at the originating points of those paths. The gap makes the required learning data set enormous and the learning process costly. To resolve the problem, we propose the adoption of multiple SVMs running in parallel, trained with non-overlapping subsets of the original data set. The maximum value of the outputs of these SVMs is taken as the estimated number of necessary paths. Numerical experiments show that our proposed method outperforms a conventional prediction method, the autoregressive moving average method with optimal parameter values determined by Akaike's information criterion, and reduces the packet-loss ratio by up to 98%.
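    A rough sketch of the parallel-SVM idea, several regressors trained on non-overlapping subsets of a sliding-window data set with the maximum prediction taken as the capacity estimate, is given below. It uses scikit-learn's SVR on synthetic traffic data; the window length, subset count, traffic model, and hyperparameters are assumptions, not the authors' configuration (feature scaling is omitted for brevity).

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic traffic trace: daily periodicity plus noise (arbitrary units).
t = np.arange(2000)
traffic = 50 + 20 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

# Sliding-window samples: predict the next value from the previous `window` values.
window = 12
X = np.array([traffic[i:i + window] for i in range(len(traffic) - window)])
y = traffic[window:]

# Train several SVMs in parallel on non-overlapping subsets of the training data.
n_models = 4
subsets = np.array_split(np.arange(len(X) - 100), n_models)
models = [SVR(kernel="rbf", C=10.0).fit(X[idx], y[idx]) for idx in subsets]

# Estimate the required capacity as the maximum of the individual predictions.
X_test, y_test = X[-100:], y[-100:]
predictions = np.vstack([m.predict(X_test) for m in models])
required = predictions.max(axis=0)
print("worst-case under-prediction:", float((y_test - required).max()))
```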

  16. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)

  17. Enumeration of fungi in fruits by the most probable number method.

    PubMed

    Watanabe, Maiko; Tsutsumi, Fumiyuki; Lee, Ken-ichi; Sugita-Konishi, Yoshiko; Kumagai, Susumu; Takatori, Kosuke; Hara-Kudo, Yukiko; Konuma, Hirotaka

    2010-01-01

    In this study, enumeration methods for fungi in foods were evaluated using fruits, which are often contaminated by fungi in the field and rot because of fungal contaminants. As the test methods, we used the standard most probable number (MPN) method with liquid medium in test tubes, which is traditionally used for enumerating bacteria, and the plate-MPN method with agar plate media, in addition to the surface plating method as the traditional enumeration method for fungi. We tested 27 samples of 9 commercial domestic fruits using their surface skin. The results indicated that the standard MPN method showed slow recovery of fungi in test tubes and lower counts than the surface plating method and the plate-MPN method in almost all samples. The fungal count on the 4th day of incubation was approximately the same as on the 10th day by the surface plating method or the plate-MPN method, indicating no significant differences between the fungal counts by these two methods. This result indicated that the plate-MPN method agrees numerically with the traditional enumeration method. Moreover, the plate-MPN method is less laborious because it does not require counting colonies; fungal counts are estimated from the number of plates with growing colonies. These advantages demonstrate that the plate-MPN method is a comparatively superior and rapid method for the enumeration of fungi. PMID:21535611

  18. Correcting errors in the optical path difference in Fourier spectroscopy: a new accurate method.

    PubMed

    Kauppinen, J; Kärkköinen, T; Kyrö, E

    1978-05-15

    A new computational method for calculating and correcting the errors of the optical path difference in Fourier spectrometers is presented. This method requires only a one-sided interferogram and a single well-separated line in the spectrum. The method also cancels out the linear phase error. The practical theory of the method is included, and the progress of the method is illustrated by simulations. The method is also verified by several simulations in order to estimate its usefulness and accuracy. An example of the use of this method in practice is also given. PMID:20198027

  19. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information is generally parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. We therefore use some rules to combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y, and ∂ΔΤ/∂z into a new result, which is used for extracting a priori information, and then incorporate the information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples with and without the a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges better. This method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  20. Multilevel Monte Carlo methods for computing failure probability of porous media flow systems

    NASA Astrophysics Data System (ADS)

    Fagerlund, F.; Hellman, F.; Målqvist, A.; Niemi, A.

    2016-08-01

    We study improvements of the standard and multilevel Monte Carlo method for point evaluation of the cumulative distribution function (failure probability) applied to porous media two-phase flow simulations with uncertain permeability. To illustrate the methods, we study an injection scenario where we consider the sweep efficiency of the injected phase as the quantity of interest and seek the probability that this quantity of interest is smaller than a critical value. In the sampling procedure, we use computable error bounds on the sweep efficiency functional to identify small subsets of realizations to solve to the highest accuracy by means of what we call selective refinement. We quantify the performance gains possible by using selective refinement in combination with both the standard and multilevel Monte Carlo method. We also identify issues in the process of practical implementation of the methods. We conclude that significant savings in computational cost are possible for failure probability estimation in a realistic setting using the selective refinement technique, in combination with both standard and multilevel Monte Carlo.
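    The multilevel telescoping estimator for a failure probability can be sketched with a toy model in which each level adds a discretization error that halves from one level to the next. The code below is not the two-phase flow setting or the selective-refinement scheme; the error model, sample counts, and threshold are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
y_crit = -1.0   # failure threshold: estimate P(Q < y_crit)

def sample_Q(level, n):
    """Toy solver: the exact quantity of interest is Q ~ N(0, 1); the level-l
    approximation adds an error whose size halves with each level. Fine and
    coarse approximations share the same underlying samples, so the level-l
    correction term has small variance."""
    q_exact = rng.standard_normal(n)
    fine = q_exact + 0.5 ** level * rng.standard_normal(n)
    coarse = q_exact + 0.5 ** (level - 1) * rng.standard_normal(n) if level > 0 else None
    return fine, coarse

# Telescoping sum: E[I_L] = E[I_0] + sum_l E[I_l - I_{l-1}], with I = indicator(Q < y_crit).
n_per_level = [200000, 100000, 50000, 25000, 12000]
estimate = 0.0
for level, n in enumerate(n_per_level):
    fine, coarse = sample_Q(level, n)
    if level == 0:
        estimate += np.mean(fine < y_crit)
    else:
        estimate += np.mean((fine < y_crit).astype(float) - (coarse < y_crit).astype(float))

# The estimate targets the finest-level approximation, which still carries a small
# discretization bias relative to the exact reference value.
print("MLMC estimate:", estimate, "exact reference:", norm.cdf(y_crit))
```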

  1. Identification of contaminant point source in surface waters based on backward location probability density function method

    NASA Astrophysics Data System (ADS)

    Cheng, Wei Ping; Jia, Yafei

    2010-04-01

    A backward location probability density function (BL-PDF) method capable of identifying the location of point sources in surface waters is presented in this paper. The relation between the forward location probability density function (FL-PDF) and the backward location probability density, based on adjoint analysis, is validated using depth-averaged free-surface flow and mass transport models and several surface water test cases. The solutions of the backward location PDF transport equation agreed well with the forward location PDF computed using the pollutant concentration at the monitoring points. Using this relation and the distribution of the concentration detected at the monitoring points, an effective point source identification method is established. The numerical error of the backward location PDF simulation is found to be sensitive to the irregularity of the computational meshes, diffusivity, and velocity gradients. The performance of the identification method is evaluated with respect to random error and the number of observed values. In addition to hypothetical cases, a real case was studied to identify the source location where a dye tracer was instantaneously injected into a stream. The study indicated that the proposed source identification method is effective, robust, and quite efficient in surface waters; the number of advection-diffusion equations that need to be solved is equal to the number of observations.

  2. Path ANalysis

    SciTech Connect

    Snell, Mark K.

    2007-07-14

    The PANL software determines the path through an Adversary Sequence Diagram (ASD) with minimum Probability of Interruption, P(I), given the ASD information and data about site detection, delay, and response force times. To accomplish this, the software generates each path through the ASD, then applies the Estimate of Adversary Sequence Interruption (EASI) methodology for calculating P(I) to each path, and keeps track of the path with the lowest P(I). Primary use is for training purposes during courses on physical security design. During such courses PANL will be used to demonstrate to students how more complex software codes are used by the US Department of Energy to determine the most-vulnerable paths and, where security needs improvement, how such codes can help determine physical security upgrades.

  3. Path ANalysis

    Energy Science and Technology Software Center (ESTSC)

    2007-07-14

    The PANL software determines the path through an Adversary Sequence Diagram (ASD) with minimum Probability of Interruption, P(I), given the ASD information and data about site detection, delay, and response force times. To accomplish this, the software generates each path through the ASD, then applies the Estimate of Adversary Sequence Interruption (EASI) methodology for calculating P(I) to each path, and keeps track of the path with the lowest P(I). Primary use is for training purposes during courses on physical security design. During such courses PANL will be used to demonstrate to students how more complex software codes are used by the US Department of Energy to determine the most-vulnerable paths and, where security needs improvement, how such codes can help determine physical security upgrades.

  4. The Path Resistance Method for Bounding the Smallest Nontrivial Eigenvalue of a Laplacian

    NASA Technical Reports Server (NTRS)

    Guattery, Stephen; Leighton, Tom; Miller, Gary L.

    1997-01-01

    We introduce the path resistance method for lower bounds on the smallest nontrivial eigenvalue of the Laplacian matrix of a graph. The method is based on viewing the graph in terms of electrical circuits; it uses clique embeddings to produce lower bounds on λ2 and star embeddings to produce lower bounds on the smallest Rayleigh quotient when there is a zero Dirichlet boundary condition. The method assigns priorities to the paths in the embedding; we show that, for an unweighted tree T, using uniform priorities for a clique embedding produces a lower bound on λ2 that is off by at most an O(log diameter(T)) factor. We show that the best bounds this method can produce for clique embeddings are the same as for a related method that uses clique embeddings and edge lengths to produce bounds.
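    For concreteness, the quantity being bounded, the smallest nontrivial Laplacian eigenvalue λ2, can be computed directly for a small graph and compared against the known closed form for a path graph. The snippet below is only this numerical check, not the clique- or star-embedding bound described above.

```python
import numpy as np

def laplacian_lambda2(edges, n):
    """Second-smallest eigenvalue of the Laplacian of an unweighted graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

# Path graph on n vertices: lambda_2 = 2 * (1 - cos(pi / n)), a standard result.
n = 20
edges = [(i, i + 1) for i in range(n - 1)]
print(laplacian_lambda2(edges, n), 2 * (1 - np.cos(np.pi / n)))
```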

  5. Tableau method for reactive path planning in an obstacle avoidance system

    NASA Astrophysics Data System (ADS)

    Beeson, Bradley K.; Kurtz, John J.; Bonner, Kevin G.

    1999-07-01

    Autonomous off-road vehicles face the daunting challenge of successfully navigating through terrain in which unmapped obstacles present hazards to safe vehicle operation. These obstacles can be sparsely scattered or densely clustered. The obstacle avoidance (OA) system on-board the autonomous vehicle must be capable of detecting all non-negotiable obstacles and planning paths around them in a sufficient computing interval to permit effective operation of the platform. To date, the reactive path planning function performed by OA systems has been essentially an exhaustive search through a set of preprogrammed swaths (linear trajectories projected through the on-board local obstacle map) to determine the best path for the vehicle to travel toward achieving a goal state. Historically, this function is a large consumer of computational resources in an OA system. A novel reactive path planner is described that minimizes processing time through the use of pre-computed indices into an n over n + 1 tableau structure with the lowest level in the tableau representing the traditional 'histogram' result. The tableau method differs significantly from other reactive planners in three ways: (1) the entire tableau is computed off-line and loaded on system startup, minimizing computational load; (2) the real-time computational load is directly proportional to the number of grid points searched and proportional to the square of the number of paths; and (3) the tableau is independent of grid resolution. Analytical and experimental comparisons of the tableau and histogram methods are presented along with generalization into an autonomous mobility system incorporating multiple feature planes and path cost evaluation.

  6. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory is read randomly, with a read distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
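    A rough back-of-the-envelope version of the mishap probability can be sketched by assuming upsets arrive as a Poisson process in each word and that each word is read (and thereby corrected) at a fixed average interval; the paper's treatment of a general random read distribution is more refined than this. All rates and sizes below are hypothetical.

```python
import numpy as np

def mishap_probability(upset_rate, read_interval, n_words, mission_hours):
    """Toy estimate (not the paper's derivation): single-bit upsets per word form a
    Poisson process; each word is read and corrected every `read_interval` hours on
    average, so a mishap is two or more upsets in one word within one interval."""
    lam_tau = upset_rate * read_interval
    # Probability of >= 2 upsets in a word during one read interval.
    p_interval = 1.0 - np.exp(-lam_tau) * (1.0 + lam_tau)
    n_intervals = mission_hours / read_interval
    # Probability of at least one mishap anywhere in memory over the mission.
    return 1.0 - (1.0 - p_interval) ** (n_words * n_intervals)

# Hypothetical numbers: 1e-6 upsets per word-hour, words revisited hourly on
# average, 2^20 words, 1000-hour mission.
print(mishap_probability(1e-6, 1.0, 2**20, 1000.0))
```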

  7. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  8. [A path-length correction method on biochemical parameter nondestructive measuring of folium].

    PubMed

    Zhang, Qian-Xuan; Zhang, Guang-Jun; Li, Qing-Bo

    2010-05-01

    Vis/NIR spectroscopy is capable of analyzing the content of biochemical parameters in folium rapidly and nondestructively. In spectrum analysis, however, path-length variations exist between samples, caused by random light scattering and leaf-thickness perturbations, and these variations degrade the precision of the quantitative analysis model. In order to resolve this problem, an improved path-length correction method based on Extended Multiplicative Scattering Correction (EMSC) is presented. In this paper, the theory of the EMSC algorithm is first derived. The EMSC method incorporates both chemical terms and wavelength functions to efficiently separate path-length effects from the concentration of interest. Second, two experiments were carried out to demonstrate the validity of the method. In Experiment 1, sixteen samples of different thickness but almost the same chlorophyll content were selected to compare how path-length affects the spectrum; after EMSC preprocessing, the coefficient of variation of the spectra approached the repeatability error of the spectrometer. In Experiment 2, thirty-two samples of different thickness and chlorophyll content were selected. A PLS model established using cross-validation was employed to evaluate the efficiency of the presented algorithm. Before preprocessing, the root mean squared error of prediction is 3.9 SPAD with 5 principal components. After preprocessing, the root mean squared error of prediction is 2.2 SPAD with 12 principal components. The results indicate that the improved EMSC preprocessing method can exactly eliminate the spectral differences caused by path-length variations between folium samples, enhance the sensitivity of the spectral data to concentration, and increase the precision of the calibration model. PMID:20672624
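    A simplified version of the multiplicative correction can be sketched by fitting each spectrum to a reference spectrum plus a low-order polynomial baseline and dividing out the fitted multiplicative coefficient. The code below omits the chemical-interference terms of the full EMSC model; the synthetic spectra and parameters are assumptions.

```python
import numpy as np

def emsc_correct(spectra, reference, poly_order=2):
    """Simplified EMSC-style correction: fit each spectrum as
    b * reference + polynomial baseline, then remove the baseline and divide by b
    so that path-length (multiplicative) differences are cancelled."""
    w = np.linspace(-1.0, 1.0, spectra.shape[1])
    basis = np.column_stack([reference] + [w ** k for k in range(poly_order + 1)])
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        coef, *_ = np.linalg.lstsq(basis, s, rcond=None)
        b, baseline = coef[0], basis[:, 1:] @ coef[1:]
        corrected[i] = (s - baseline) / b
    return corrected

# Synthetic demo: the same underlying spectrum measured with different effective
# path lengths and sloping baselines collapses onto the reference after correction.
x = np.linspace(0, 1, 200)
reference = np.exp(-((x - 0.5) / 0.1) ** 2)
spectra = np.vstack([1.5 * reference + 0.2 + 0.3 * x,
                     0.7 * reference - 0.1 + 0.05 * x])
corrected = emsc_correct(spectra, reference)
print("max deviation from reference:", np.abs(corrected - reference).max())
```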

  9. A Bio-Inspired Method for the Constrained Shortest Path Problem

    PubMed Central

    Wang, Hongping; Lu, Xi; Wang, Qing

    2014-01-01

    The constrained shortest path (CSP) problem has been widely used in transportation optimization, crew scheduling, network routing, and so on. It remains an open issue because it is an NP-hard problem. In this paper, we propose an innovative method based on the internal mechanism of the adaptive amoeba algorithm. The proposed method is divided into two parts. In the first part, we employ the original amoeba algorithm to solve the shortest path problem in directed networks. In the second part, we combine the Physarum algorithm with a bio-inspired rule to deal with the CSP. Finally, by comparing the results with those of another method on examples of the DCLC problem, we demonstrate the accuracy of the proposed method. PMID:24959603

  10. Fast method for the estimation of impact probability of near-Earth objects

    NASA Astrophysics Data System (ADS)

    Vavilov, D.; Medvedev, Y.

    2014-07-01

    We propose a method to estimate the probability of collision of a celestial body with the Earth (or another major planet) at a given time moment t. Let there be a set of observations of a small body. At the initial time moment T_0, a nominal orbit is defined by the least squares method. In our method, a unique coordinate system is used. It is supposed that errors of observations are related to errors of coordinates and velocities linearly and that the distribution law of observation errors is normal. The unique frame is defined as follows. First of all, we fix an osculating ellipse of the body's orbit at the time moment t. The mean anomaly M in this osculating ellipse is a coordinate of the introduced system. The spatial coordinate ξ is perpendicular to the plane which contains the fixed ellipse. η is a spatial coordinate, too, and our axes satisfy the right-hand rule. The origin of ξ and η corresponds to the given M point on the ellipse. The components of the velocity are the corresponding derivatives $\dot{\xi}$, $\dot{\eta}$, $\dot{M}$. To calculate the probability of collision, we numerically integrate the equations of the asteroid's motion taking into account perturbations and calculate a normal matrix N. The probability is determined as follows: $P = \frac{|\det N|^{1/2}}{(2\pi)^{3}} \int_{\Omega} e^{-\frac{1}{2} x^{T} N x}\, dx$, where x denotes a six-dimensional vector of coordinates and velocities, Ω is the region occupied by the Earth, and the superscript T denotes the matrix transpose operation. To take into account the gravitational attraction of the Earth, the radius of the Earth is increased by a factor of $\sqrt{1 + v_s^2/v_{rel}^2}$, where v_s is the escape velocity and v_{rel} is the small body's velocity relative to the Earth. The six-dimensional integral is analytically integrated over the velocity components on (-∞, +∞). After that we have the 3×3 matrix N_1, and the six-dimensional integral becomes a three-dimensional integral. To calculate it quickly we do the following. We introduce

  11. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    SciTech Connect

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5.

  12. Signal optimization, noise reduction, and systematic error compensation methods in long-path DOAS measurements

    NASA Astrophysics Data System (ADS)

    Simeone, Emilio; Donati, Alessandro

    1998-12-01

    Increasing the exploitable optical path length represents one of the most important goals in improving differential optical absorption spectroscopy (DOAS) instruments. The methods that allow long-path measurements in the UV region are presented and discussed in this paper. These methods have been tested in the new Italian DOAS instrument, SPOT, developed and manufactured by Kayser Italia. The system was equipped with a tele-controlled optical shuttle on the light-source unit, allowing background radiation measurement. Absolute wavelength calibration of the spectra was performed by means of a collimated UV beam from a mercury lamp integrated in the telescope. In addition, possible thermal effects on the dispersion coefficients of the holographic grating have been automatically compensated by means of a general non-linear fit during the spectral analysis session. Measurements in the bistatic configuration have been performed in urban areas at 1300 m and 2200 m in three spectral windows from 245 to 380 nm. Measurements with these features are expected in the other spectral windows on path lengths ranging from about 5 to 10 km in urban areas. The DOAS technique can be used in the field for very fast measurements in the 245-275 nm spectral range, on path lengths up to about 2500 m.

  13. A Simple Method for Solving the SVM Regularization Path for Semidefinite Kernels.

    PubMed

    Sentelle, Christopher G; Anagnostopoulos, Georgios C; Georgiopoulos, Michael

    2016-04-01

    The support vector machine (SVM) remains a popular classifier for its excellent generalization performance and the applicability of kernel methods; however, it still requires tuning of a regularization parameter, C, to achieve optimal performance. Regularization path-following algorithms efficiently compute the solution at all possible values of the regularization parameter, relying on the fact that the SVM solution is piecewise linear in C. The SVMPath originally introduced by Hastie et al., while representing a significant theoretical contribution, does not work with semidefinite kernels. Ong et al. introduced an improved SVMPath (ISVMP) algorithm, which addresses the semidefinite kernel; however, singular value decomposition or QR factorizations are required, and a linear programming solver is needed to find the next C value at each iteration. We introduce a simple implementation of the path-following algorithm that automatically handles semidefinite kernels without requiring a method to detect singular matrices, specialized factorizations, or an external solver. We provide theoretical results showing how this method resolves issues associated with the semidefinite kernel, and discuss in detail the potential sources of degeneracy and cycling and how cycling is resolved. Moreover, we introduce an initialization method for unequal class sizes based upon artificial variables that works within the context of the existing path-following algorithm and does not require an external solver. Experiments compare performance with the ISVMP algorithm introduced by Ong et al. and show that the proposed method is competitive in terms of training time while also maintaining high accuracy. PMID:26011894

  14. Exploratory and confirmatory factor analyses of probability discounting of different outcomes across different methods of measurement.

    PubMed

    Terrell, Heather K; Derenne, Adam; Weatherly, Jeffrey N

    2014-01-01

    The present studies used exploratory and confirmatory factor analyses to explore the degree to which probability discounting processes are similar to delay discounting processes. To determine whether these processes are similar, 2 questions were addressed: the degree to which probability discounting outcomes can be categorized into multiple domains (as demonstrated for delay discounting) and whether the inverse magnitude effect would be observed for nonmonetary outcomes. An exploratory factor analysis was conducted using data from the fill-in-the-blank method (Study 1), followed by a confirmatory factor analysis using data from a multiple-choice method (Study 2) as a replication. These studies provide support for the idea that outcomes can be subdivided into multiple domains. Generally, the discounting rates were steeper for tangible outcomes than nontangible outcomes, and a magnitude effect was observed that was consistent with, rather than the inverse of, that observed for delay discounting tasks. Complexities in the relationship between probability discounting processes and delay discounting processes are discussed. PMID:24934012

  15. RATIONAL DETERMINATION METHOD OF PROBABLE FREEZING INDEX FOR n-YEARS CONSIDERING THE REGIONAL CHARACTERISTICS

    NASA Astrophysics Data System (ADS)

    Kawabata, Shinichiro; Hayashi, Keiji; Kameyama, Shuichi

    This paper investigates a method for obtaining the probable freezing index for n-years from past frost-action damage and meteorological data. From an investigation of Japanese cold-winter data for Hokkaido, Tohoku and the area south of Tohoku, it was found that the extent of cold winters showed a regularity depending on location, south or north. Also, after obtaining return periods of cold winters by area, obvious regional characteristics were found. Mild winters are rare in Hokkaido; however, it was clarified that when Hokkaido does have a cold winter, its severity is greater. It was found effective to determine the probable freezing indices using 20-, 15- and 10-year return periods for Hokkaido, Tohoku and the area south of Tohoku, respectively.
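
    A probable freezing index for an n-year return period is, in essence, the value whose annual exceedance probability is 1/n. The sketch below shows one conventional way to compute such a value from a station record by fitting an extreme-value (Gumbel) distribution; the paper's actual procedure may differ, and the station data used here are invented.

        import numpy as np
        from scipy import stats

        def probable_freezing_index(annual_fi, return_period_years):
            # Fit a Gumbel (extreme value type I) distribution to annual freezing indices
            # and return the value exceeded on average once every return_period_years years.
            loc, scale = stats.gumbel_r.fit(annual_fi)
            return stats.gumbel_r.ppf(1.0 - 1.0 / return_period_years, loc=loc, scale=scale)

        # hypothetical annual freezing indices (degree-days) for one station
        fi = np.array([620., 710., 540., 830., 760., 690., 910., 580., 640., 720.])
        fi_20yr = probable_freezing_index(fi, 20)   # e.g. the 20-year value used for Hokkaido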

  16. [Comparison of correction methods for nonlinear optic path difference of reflecting rotating Fourier transform spectrometer].

    PubMed

    Jing, Juan-Juan; Zhou, Jin-Song; Xiangli, Bin; Lü, Qun-Bo; Wei, Ru-Yi

    2010-06-01

    The principle of the reflecting rotating Fourier transform spectrometer is introduced in the present paper. The nonlinearity of the optical path difference (OPD) of a rotating Fourier transform spectrometer is a universal problem produced by the rotation of the rotating mirror. The nonlinear OPD leads to a spurious recovered spectrum, so it is necessary to compensate for it. Three correction methods for the nonlinear OPD are described and compared in this paper: the NUFFT method, the OPD replacement method and the interferogram fitting method. The results indicate that NUFFT is the best method for compensating the nonlinear OPD. The OPD replacement method is nearly as good, with precision almost the same as the NUFFT method, and the relative errors of both are better than 0.13%, but the computational efficiency of the OPD replacement method is lower than that of the NUFFT method. The precision and computational efficiency of the interferogram fitting method are unsatisfactory, because the interferogram fluctuates rapidly, especially around the zero optical path difference, so it is unsuitable for polynomial fitting; and because this method requires piecewise fitting, its computational efficiency is the lowest. Thus the NUFFT method is the most suitable for nonlinear OPD compensation in a reflecting rotating Fourier transform spectrometer. PMID:20707175
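
    As a rough illustration of how such a correction can work, the sketch below resamples an interferogram recorded at non-uniform OPD values onto a uniform grid before the FFT, in the spirit of the OPD replacement method; a true NUFFT would instead evaluate the Fourier sum at the non-uniform samples directly. The code assumes monotonically increasing OPD samples and is not the authors' implementation.

        import numpy as np

        def spectrum_from_nonlinear_opd(opd, interferogram):
            # Resample onto an equally spaced OPD grid, then FFT to recover the spectrum.
            n = len(opd)
            opd_u = np.linspace(opd.min(), opd.max(), n)
            ifg_u = np.interp(opd_u, opd, interferogram)              # requires increasing opd
            spec = np.abs(np.fft.rfft(ifg_u - ifg_u.mean()))
            wavenumber = np.fft.rfftfreq(n, d=opd_u[1] - opd_u[0])    # cycles per OPD unit
            return wavenumber, spec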

  17. Computing energy spectra for quantum systems using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Rejcek, J. M.; Fazleev, N. G.

    2009-10-01

    We use group theory considerations and properties of a continuous path to define a failure tree numerical procedure for calculating the lowest energy eigenvalues for quantum systems using the Feynman-Kac path integral method. Within this method the solution of the imaginary time Schrödinger equation is approximated by random walk simulations on a discrete grid constrained only by symmetry considerations of the Hamiltonian. The required symmetry constraints on random walk simulations are associated with a given irreducible representation and are found by identifying the eigenvalues for the irreducible representations corresponding to the symmetric or antisymmetric eigenfunctions for each group operator. The numerical method is applied to compute the eigenvalues of the ground and excited states of the hydrogen and helium atoms.
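
    The core of the Feynman-Kac approach is that, for H = -1/2 d^2/dx^2 + V, the average of exp(-int_0^T V(W(t)) dt) over Brownian paths W decays like exp(-E0*T), so the ground-state energy can be read off the decay rate. The minimal sketch below does this for a one-dimensional harmonic oscillator (exact E0 = 0.5 in units where hbar = m = omega = 1); it omits the symmetry constraints the abstract uses to reach excited states, and the step size, total time and walker count are illustrative choices, not the paper's.

        import numpy as np

        def feynman_kac_ground_energy(V, T=10.0, dt=0.01, n_walkers=20_000, seed=1):
            # Z(T) = E[ exp(-int_0^T V(W(t)) dt) ] over standard Brownian paths,
            # and E0 ~= -ln(Z)/T for sufficiently large T.
            rng = np.random.default_rng(seed)
            x = np.zeros(n_walkers)
            neg_action = np.zeros(n_walkers)
            for _ in range(int(T / dt)):
                neg_action -= V(x) * dt                       # accumulate -integral of V along each path
                x += np.sqrt(dt) * rng.normal(size=n_walkers)
            return -np.log(np.exp(neg_action).mean()) / T

        E0 = feynman_kac_ground_energy(lambda x: 0.5 * x * x)   # harmonic oscillator, exact value 0.5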

  18. A novel algorithm for solving optimal path planning problems based on parametrization method and fuzzy aggregation

    NASA Astrophysics Data System (ADS)

    Zamirian, M.; Kamyad, A. V.; Farahi, M. H.

    2009-09-01

    In this Letter a new approach is presented for solving optimal path planning problems for a single rigid, free-moving object in two- or three-dimensional space in the presence of stationary or moving obstacles. Path planning problems of this kind have several incompatible objectives, such as the length of the path, which must be minimized, and the distance between the path and the obstacles, which must be maximized; the result is a multi-objective dynamic optimization problem (MODOP). Considering the imprecise nature of the decision maker's (DM) judgment, these multiple objectives are viewed as fuzzy variables. By determining intervals for the values of these fuzzy variables, flexible monotonically decreasing or increasing membership functions are defined as the degrees of satisfaction of these fuzzy variables on their intervals. The optimal path planning policy is then sought by maximizing the aggregated fuzzy decision values, resulting in a fuzzy multi-objective dynamic optimization problem (FMODOP). Using a suitable t-norm, the FMODOP is converted into a non-linear dynamic optimization problem (NLDOP). By using the parametrization method and some calculations, the NLDOP is converted into a sequence of conventional non-linear programming problems (NLPPs). It is proved that the solutions of this sequence of NLPPs tend to a Pareto optimal solution which, among the other Pareto optimal solutions, best satisfies the DM for the MODOP. Finally, the above procedure is proposed as a novel algorithm integrating the parametrization method and fuzzy aggregation to solve the MODOP. The efficiency of our approach is confirmed by numerical examples.

  19. Accelerated path integral methods for atomistic simulations at ultra-low temperatures

    NASA Astrophysics Data System (ADS)

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5+. We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state.

  20. Accelerated path integral methods for atomistic simulations at ultra-low temperatures.

    PubMed

    Uhl, Felix; Marx, Dominik; Ceriotti, Michele

    2016-08-01

    Path integral methods provide a rigorous and systematically convergent framework to include the quantum mechanical nature of atomic nuclei in the evaluation of the equilibrium properties of molecules, liquids, or solids at finite temperature. Such nuclear quantum effects are often significant for light nuclei already at room temperature, but become crucial at cryogenic temperatures such as those provided by superfluid helium as a solvent. Unfortunately, the cost of converged path integral simulations increases significantly upon lowering the temperature so that the computational burden of simulating matter at the typical superfluid helium temperatures becomes prohibitive. Here we investigate how accelerated path integral techniques based on colored noise generalized Langevin equations, in particular the so-called path integral generalized Langevin equation thermostat (PIGLET) variant, perform in this extreme quantum regime using as an example the quasi-rigid methane molecule and its highly fluxional protonated cousin, CH5 (+). We show that the PIGLET technique gives a speedup of two orders of magnitude in the evaluation of structural observables and quantum kinetic energy at ultralow temperatures. Moreover, we computed the spatial spread of the quantum nuclei in CH4 to illustrate the limits of using such colored noise thermostats close to the many body quantum ground state. PMID:27497533

  1. Probability of identification: a statistical model for the validation of qualitative botanical identification methods.

    PubMed

    LaBudde, Robert A; Harnly, James M

    2012-01-01

    A qualitative botanical identification method (BIM) is an analytical procedure that returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) material, or whether it contains excessive nontarget (undesirable) material. The report describes the development and validation of studies for a BIM based on the proportion of replicates identified, or probability of identification (POI), as the basic observed statistic. The statistical procedures proposed for data analysis follow closely those of the probability of detection, and harmonize the statistical concepts and parameters between quantitative and qualitative method validation. Use of POI statistics also harmonizes statistical concepts for botanical, microbiological, toxin, and other analyte identification methods that produce binary results. The POI statistical model provides a tool for graphical representation of response curves for qualitative methods, reporting of descriptive statistics, and application of performance requirements. Single collaborator and multicollaborative study examples are given. PMID:22468371
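
    The basic observed statistic is simply a proportion, so a minimal, hedged illustration of the POI computation is a binomial point estimate with an exact (Clopper-Pearson) confidence interval; the specific interval procedures used in the report may differ, and the replicate data below are invented.

        import numpy as np
        from scipy import stats

        def probability_of_identification(results, alpha=0.05):
            # POI = proportion of replicates reported "identified" (1 = identified, 0 = not),
            # with a two-sided Clopper-Pearson confidence interval.
            results = np.asarray(results)
            n, k = results.size, int(results.sum())
            lo = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            hi = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return k / n, (lo, hi)

        # twelve hypothetical replicates of the target botanical material
        poi, ci95 = probability_of_identification([1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1])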

  2. Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation

    NASA Astrophysics Data System (ADS)

    Birkholz, Adam B.; Schlegel, H. Bernhard

    2016-05-01

    Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained.

  3. Path optimization by a variational reaction coordinate method. II. Improved computational efficiency through internal coordinates and surface interpolation.

    PubMed

    Birkholz, Adam B; Schlegel, H Bernhard

    2016-05-14

    Reaction path optimization is being used more frequently as an alternative to the standard practice of locating a transition state and following the path downhill. The Variational Reaction Coordinate (VRC) method was proposed as an alternative to chain-of-states methods like nudged elastic band and string method. The VRC method represents the path using a linear expansion of continuous basis functions, allowing the path to be optimized variationally by updating the expansion coefficients to minimize the line integral of the potential energy gradient norm, referred to as the Variational Reaction Energy (VRE) of the path. When constraints are used to control the spacing of basis functions and to couple the minimization of the VRE with the optimization of one or more individual points along the path (representing transition states and intermediates), an approximate path as well as the converged geometries of transition states and intermediates along the path are determined in only a few iterations. This algorithmic efficiency comes at a high per-iteration cost due to numerical integration of the VRE derivatives. In the present work, methods for incorporating redundant internal coordinates and potential energy surface interpolation into the VRC method are described. With these methods, the per-iteration cost, in terms of the number of potential energy surface evaluations, of the VRC method is reduced while the high algorithmic efficiency is maintained. PMID:27179465

  4. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E.; Valentine, John D.; Beauchamp, Brock R.

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
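
    As a hedged sketch of the underlying statistic (not the patented processing chain, which also estimates the dynamic background), Wald's SPRT for Poisson count data accumulates a log-likelihood ratio and declares a decision when it crosses thresholds set by the allowed false-alarm and missed-detection rates. The rates and counts below are invented.

        import math

        def sprt_alarm(counts, dt, bkg_rate, elevated_rate, alpha=1e-3, beta=0.1):
            # H0: counts ~ Poisson(bkg_rate*dt); H1: counts ~ Poisson(elevated_rate*dt)
            upper = math.log((1 - beta) / alpha)   # crossing it declares elevated radiation
            lower = math.log(beta / (1 - alpha))   # crossing it accepts background and restarts
            llr = 0.0
            for c in counts:
                llr += c * math.log(elevated_rate / bkg_rate) - (elevated_rate - bkg_rate) * dt
                if llr >= upper:
                    return "alarm"
                if llr <= lower:
                    return "background"
            return "continue"

        decision = sprt_alarm([24, 27, 26], dt=1.0, bkg_rate=12.0, elevated_rate=20.0)  # -> "alarm"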

  5. Theoretical analysis of integral neutron transport equation using collision probability method with quadratic flux approach

    SciTech Connect

    Shafii, Mohammad Ali Meidianti, Rahma Wildian, Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto

    2014-09-30

    A theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation with the CP method uses a flat flux approach. In this research, the CP method is implemented for a cylindrical nuclear fuel cell with a non-flat flux approach over the spatial mesh. This means that the neutron flux is allowed to differ from point to point within the nuclear fuel cell, following a quadratic flux distribution. The result is presented here in the form of the quadratic flux, which gives a better understanding of the real conditions in the cell calculation and serves as a starting point for computational implementation.

  6. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES Beta

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
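
    The article's specific Bayesian model is not reproduced here; the sketch below shows the generic conjugate (Beta-Binomial) version of the same idea, in which the HRA-assigned HEP sets the prior mean, a pseudo-count expresses confidence in that prior, and simulator successes and failures enter as a binomial likelihood. All numbers are illustrative.

        from scipy import stats

        def update_hep(prior_hep, prior_strength, n_trials, n_errors):
            # Beta prior with mean = prior_hep and weight = prior_strength pseudo-trials,
            # updated with n_errors observed in n_trials simulator opportunities.
            a = prior_hep * prior_strength + n_errors
            b = (1.0 - prior_hep) * prior_strength + (n_trials - n_errors)
            posterior_mean = a / (a + b)
            ci90 = stats.beta.ppf([0.05, 0.95], a, b)
            return posterior_mean, ci90

        # e.g. an HRA method assigns HEP = 0.01; 50 simulator trials show 1 error
        hep, ci = update_hep(prior_hep=0.01, prior_strength=20.0, n_trials=50, n_errors=1)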

  7. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  8. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    SciTech Connect

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.

  9. Analytical error analysis of Clifford gates by the fault-path tracer method

    NASA Astrophysics Data System (ADS)

    Janardan, Smitha; Tomita, Yu; Gutiérrez, Mauricio; Brown, Kenneth R.

    2016-08-01

    We estimate the success probability of quantum protocols composed of Clifford operations in the presence of Pauli errors. Our method is derived from the fault-point formalism previously used to determine the success rate of low-distance error correction codes. Here we apply it to a wider range of quantum protocols and identify circuit structures that allow for efficient calculation of the exact success probability and even the final distribution of output states. As examples, we apply our method to the Bernstein-Vazirani algorithm and the Steane [[7,1,3

  10. Analytical error analysis of Clifford gates by the fault-path tracer method

    NASA Astrophysics Data System (ADS)

    Janardan, Smitha; Tomita, Yu; Gutiérrez, Mauricio; Brown, Kenneth R.

    2016-05-01

    We estimate the success probability of quantum protocols composed of Clifford operations in the presence of Pauli errors. Our method is derived from the fault-point formalism previously used to determine the success rate of low-distance error correction codes. Here we apply it to a wider range of quantum protocols and identify circuit structures that allow for efficient calculation of the exact success probability and even the final distribution of output states. As examples, we apply our method to the Bernstein-Vazirani algorithm and the Steane [[7,1,3

  11. Tunnel-construction methods and foraging path of a fossorial herbivore, Geomys bursarius

    USGS Publications Warehouse

    Andersen, Douglas C.

    1988-01-01

    The fossorial rodent Geomys bursarius excavates tunnels to find and gain access to belowground plant parts. This is a study of how the foraging path of this animal, as denoted by feeding-tunnel systems constructed within experimental gardens, reflects both adaptive behavior and constraints associated with the fossorial lifestyle. The principal method of tunnel construction involves the end-to-end linking of short, linear segments whose directionalities are bimodal, but symmetrically distributed about 0°. The sequence of construction of left- and right-directed segments is random, and segments tend to be equal in length. The resulting tunnel advances, zigzag-fashion, along a single heading. This linearity, and the tendency for branches to be orthogonal to the originating tunnel, are consistent with the search path predicted for a "harvesting animal" (Pyke, 1978) from optimal-foraging theory. A suite of physical and physiological constraints on the burrowing process, however, may be responsible for this geometric pattern. That is, by excavating in the most energy-efficient manner, G. bursarius automatically creates the basic components to an optimal-search path. The general search pattern was not influenced by habitat quality (plant density). Branch origins are located more often than expected at plants, demonstrating area-restricted search, a tactic commonly noted in aboveground foragers. The potential trade-offs between construction methods that minimize energy cost and those that minimize vulnerability to predators are discussed.

  12. Acoustic method for measuring the sound speed of gases over small path lengths.

    PubMed

    Olfert, J S; Checkel, M D; Koch, C R

    2007-05-01

    Acoustic "phase shift" methods have been used in the past to accurately measure the sound speed of gases. In this work, a phase shift method for measuring the sound speed of gases over small path lengths is presented. We have called this method the discrete acoustic wave and phase detection (DAWPD) method. Experimental results show that the DAWPD method gives accurate (+/-3.2 ms) and predictable measurements that closely match theory. The sources of uncertainty in the DAWPD method are examined and it is found that ultrasonic reflections and changes in the frequency ratio of the transducers (the ratio of driving frequency to resonant frequency) can be major sources of error. Experimentally, it is shown how these sources of uncertainty can be minimized. PMID:17552851

  13. Path Analysis and Residual Plotting as Methods of Environmental Scanning in Higher Education: An Illustration with Applications and Enrollments.

    ERIC Educational Resources Information Center

    Morcol, Goktug; McLaughlin, Gerald W.

    1990-01-01

    The study proposes using path analysis and residual plotting as methods supporting environmental scanning in strategic planning for higher education institutions. Path models of three levels of independent variables are developed. Dependent variables measuring applications and enrollments at Virginia Polytechnic Institute and State University are…

  14. Probability-Based Determination Methods for Service Waiting in Service-Oriented Computing Environments

    NASA Astrophysics Data System (ADS)

    Zeng, Sen; Huang, Shuangxi; Liu, Yang

    Cooperative business processes (CBP)-based service-oriented enterprise networks (SOEN) are emerging with the significant advances of enterprise integration and service-oriented architecture. Performance prediction and optimization for CBP-based SOEN is very complex. To meet these challenges, one of the key points is to reduce an abstract service's waiting number for its physical services. This paper introduces a probability-based determination method (PBDM) for an abstract service's waiting number, M_i, and time span, τ_i, for its physical services. The determination of M_i and τ_i is based on the physical services' arrival rule and the distribution functions of their overall performance. In PBDM, the arrival probability of the physical services with the best overall performance value is a pre-defined reliability. PBDM makes thorough use of the information in the physical services' arrival rule and performance distribution functions, which will improve the computational efficiency of the scheme design and performance optimization of collaborative business processes in service-oriented computing environments.

  15. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    PubMed

    Jung, Minsoo

    2015-01-01

    When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population survey-methodologically because the response rate is low and its members are not quite honest with their responses when probability sampling is used. The only alternative known to address the problems caused by previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when we survey a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this study, the effect of the chain-referral sampling of RDS tends to diminish as the sample gets bigger, and it becomes stabilized as the wave progresses. Therefore, it shows that the final sample information can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered as an alternative which can improve upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well. PMID:26107223

  16. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  17. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method

    PubMed Central

    Alani, Amir M.; Faramarzi, Asaad

    2015-01-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industry concerned (e.g., water companies) to better plan their resources by providing accurate prediction of the remaining safe life of cementitious sewer pipes. PMID:26068092
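
    The paper's SFEM couples the random variables to a finite element model; as a much simpler stand-in, the sketch below estimates a time-dependent failure probability by plain Monte Carlo on an illustrative limit state (corrosion depth plus load demand versus initial wall thickness). The distributions and numbers are invented, not taken from the paper.

        import numpy as np

        def pipe_failure_probability(t_years, n_samples=200_000, seed=0):
            # Failure when the corroded wall can no longer supply the thickness the loads demand.
            rng = np.random.default_rng(seed)
            wall0  = rng.normal(50.0, 3.0, n_samples)               # initial wall thickness, mm
            rate   = rng.lognormal(np.log(0.8), 0.3, n_samples)     # corrosion rate, mm/year
            demand = rng.normal(10.0, 2.0, n_samples)               # thickness required by the loads, mm
            return float(np.mean(wall0 - rate * t_years <= demand))

        pf_50yr = pipe_failure_probability(50.0)   # probability of failure after 50 years of service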

  18. Predicting the Probability of Failure of Cementitious Sewer Pipes Using Stochastic Finite Element Method.

    PubMed

    Alani, Amir M; Faramarzi, Asaad

    2015-06-01

    In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to the combined effect of corrosion and stresses. A non-linear time-dependent model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion, on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the industry concerned (e.g., water companies) to better plan their resources by providing accurate prediction of the remaining safe life of cementitious sewer pipes. PMID:26068092

  19. Probability estimation with machine learning methods for dichotomous and multicategory outcome: applications.

    PubMed

    Kruppa, Jochen; Liu, Yufeng; Diener, Hans-Christian; Holste, Theresa; Weimar, Christian; König, Inke R; Ziegler, Andreas

    2014-07-01

    Machine learning methods are applied to three different large datasets, all dealing with probability estimation problems for dichotomous or multicategory data. Specifically, we investigate k-nearest neighbors, bagged nearest neighbors, random forests for probability estimation trees, and support vector machines with the kernels of Bessel, linear, Laplacian, and radial basis type. Comparisons are made with logistic regression. The dataset from the German Stroke Study Collaboration with dichotomous and three-category outcome variables allows, in particular, for temporal and external validation. The other two datasets are freely available from the UCI learning repository and provide dichotomous outcome variables. One of them, the Cleveland Clinic Foundation Heart Disease dataset, uses data from one clinic for training and from three clinics for external validation, while the other, the thyroid disease dataset, allows for temporal validation by separating data into training and test data by date of recruitment into study. For dichotomous outcome variables, we use receiver operating characteristics, areas under the curve values with bootstrapped 95% confidence intervals, and Hosmer-Lemeshow-type figures as comparison criteria. For dichotomous and multicategory outcomes, we calculated bootstrap Brier scores with 95% confidence intervals and also compared them through bootstrapping. In a supplement, we provide R code for performing the analyses and for random forest analyses in Random Jungle, version 2.1.0. The learning machines show promising performance over all constructed models. They are simple to apply and serve as an alternative approach to logistic or multinomial logistic regression analysis. PMID:24989843

  20. An Alternative Teaching Method of Conditional Probabilities and Bayes' Rule: An Application of the Truth Table

    ERIC Educational Resources Information Center

    Satake, Eiki; Vashlishan Murray, Amy

    2015-01-01

    This paper presents a comparison of three approaches to the teaching of probability to demonstrate how the truth table of elementary mathematical logic can be used to teach the calculations of conditional probabilities. Students are typically introduced to the topic of conditional probabilities--especially the ones that involve Bayes' rule--with…
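
    A compact example of the kind of conditional-probability calculation such a lesson targets (Bayes' rule for a diagnostic-test question, with invented numbers) is sketched below; it is not drawn from the paper's truth-table materials.

        # P(H | E) from the prior, the hit rate and the false-positive rate
        p_h = 0.01              # P(H): prior probability of the hypothesis
        p_e_given_h = 0.95      # P(E | H): probability of the evidence when H is true
        p_e_given_not_h = 0.05  # P(E | not H): false-positive probability

        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total probability of E
        p_h_given_e = p_e_given_h * p_h / p_e                   # Bayes' rule: about 0.16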

  1. A chain-of-states acceleration method for the efficient location of minimum energy paths

    SciTech Connect

    Hernández, E. R. Herrero, C. P.; Soler, J. M.

    2015-11-14

    We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in poly-atomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.

  2. Nuclear spin selection rules for reactive collision systems by the spin-modification probability method.

    PubMed

    Park, Kisam; Light, John C

    2007-12-14

    The spin-modification probability (SMP) method, which provides fundamental and detailed quantitative information on the nuclear spin selection rules, is discussed more systematically and generalized for reactive collision systems involving more than one configuration of reactant and product molecules, explicitly taking account of the conservation of the overall nuclear spin symmetry as well as the conservation of the total nuclear spin angular momentum, under the assumption of no nuclear hyperfine interaction. The values of SMP once calculated can be used for any system of identical nuclei of any spin as long as the system has the corresponding nuclear spin symmetry. The values of SMP calculated for simple systems can also be used for more complex systems containing several kinds of identical nuclei or various isotopomers. The generalized formulation of statistical scattering theory which can easily represent various rearrangement mechanisms is also presented. PMID:18081384

  3. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method

    PubMed Central

    Fogel, Allison R.; Rosenberg, Jason C.; Lehman, Frank M.; Kuperberg, Gina R.; Patel, Aniruddh D.

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5–9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such ‘authentic cadence’ melody was matched to a ‘non-cadential’ (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of

  4. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method.

    PubMed

    Fogel, Allison R; Rosenberg, Jason C; Lehman, Frank M; Kuperberg, Gina R; Patel, Aniruddh D

    2015-01-01

    Prediction or expectancy is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was matched to a 'non-cadential' (NC) melody matched in terms of length, rhythm and melodic contour, but differing in implied harmonic structure. Participants showed much greater consistency in the notes sung following AC vs. NC melodies on average. However, significant variation in degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses for the method include exploring the interplay of different factors shaping melodic expectation, and designing experiments that compare the cognitive mechanisms of prediction in

  5. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".

  6. Calculating splittings between energy levels of different symmetry using path-integral methods.

    PubMed

    Mátyus, Edit; Althorpe, Stuart C

    2016-03-21

    It is well known that path-integral methods can be used to calculate the energy splitting between the ground and the first excited state. Here we show that this approach can be generalized to give the splitting patterns between all the lowest energy levels from different symmetry blocks that lie below the first-excited totally symmetric state. We demonstrate this property numerically for some two-dimensional models. The approach is likely to be useful for computing rovibrational energy levels and tunnelling splittings in floppy molecules and gas-phase clusters. PMID:27004864

  7. Partially coherent scattering in stellar chromospheres. II - The first-order escape probability method. III - A second-order escape probability method

    NASA Technical Reports Server (NTRS)

    Gayley, K. G.

    1992-01-01

    Approximate analytic expressions are derived for resonance-line wing diagnostics, accounting for frequency redistribution effects, for homogeneous slabs, and slabs with a constant Planck function gradient. Resonance-line emission profiles from a simplified conceptual standpoint are described in order to elucidate the basic physical parameters of the line-forming layers prior to the performance of detailed numerical calculations. An approximate analytic expression is derived for the dependence on stellar surface gravity of the location of the Ca II and Mg II resonance-line profile peaks. An approximate radiative transfer equation using generalized second-order escape probabilities, applicable even in the presence of nearly coherent scattering in the damping wings of resonance lines, is derived. Approximate analytic solutions that can be applied in special regimes and achieve good agreement with accurate numerical results are found.

  8. Refinement of a Method for Identifying Probable Archaeological Sites from Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Comer, Douglas C.; Priebe, Carey E.; Sussman, Daniel; Chen, Li

    2012-01-01

    To facilitate locating archaeological sites before they are compromised or destroyed, we are developing approaches for generating maps of probable archaeological sites, through detecting subtle anomalies in vegetative cover, soil chemistry, and soil moisture by analyzing remotely sensed data from multiple sources. We previously reported some success in this effort with a statistical analysis of slope, radar, and Ikonos data (including tasseled cap and NDVI transforms) with Student's t-test. We report here on new developments in our work, performing an analysis of 8-band multispectral WorldView-2 data. The WorldView-2 analysis begins by computing medians and median absolute deviations for the pixels in various annuli around each site of interest on the 28 band difference ratios. We then use principal components analysis followed by linear discriminant analysis to train a classifier which assigns a posterior probability that a location is an archaeological site. We tested the procedure using leave-one-out cross validation with a second leave-one-out step to choose parameters on a 9,859x23,000 subset of the WorldView-2 data over the western portion of Ft. Irwin, CA, USA. We used 100 known non-sites and trained one classifier for lithic sites (n=33) and one classifier for habitation sites (n=16). We then analyzed convex combinations of scores from the Archaeological Predictive Model (APM) and our scores. We found that the combined scores had a higher area under the ROC curve than either individual method, indicating that including WorldView-2 data in the analysis improved the predictive power of the provided APM.
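
    The feature-and-classifier chain described above (median/MAD statistics per annulus, then PCA, then LDA posterior probabilities) can be sketched with standard tooling; the code below is a hedged outline, not the authors' pipeline, and the function names and the choice of 10 principal components are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.pipeline import make_pipeline

        def annulus_features(ratio_bands, annulus_mask):
            # Median and median absolute deviation of each band-difference-ratio image
            # over the pixels of one annulus around a candidate location.
            feats = []
            for band in ratio_bands:             # list of 2-D arrays, one per ratio
                vals = band[annulus_mask]
                med = np.median(vals)
                feats += [med, np.median(np.abs(vals - med))]
            return np.array(feats)

        def train_site_classifier(X, y, n_components=10):
            # X: one feature row per known site / non-site, y: 1 = site, 0 = non-site
            clf = make_pipeline(PCA(n_components=n_components), LinearDiscriminantAnalysis())
            clf.fit(X, y)
            return clf   # clf.predict_proba(X_new)[:, 1] gives the posterior site probability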

  9. Using computerized tomography to determine ionospheric structures. Part 2, A method using curved paths to increase vertical resolution

    SciTech Connect

    Vittitoe, C.N.

    1993-08-01

    A method is presented to unfold the two-dimensional vertical structure in electron density by using data on the total electron content for a series of paths through the ionosphere. The method uses a set of orthonormal basis functions to represent the vertical structure and takes advantage of curved paths and the eikonical equation to reduce the number of iterations required for a solution. Curved paths allow a more thorough probing of the ionosphere with a given set of transmitter and receiver positions. The approach can be directly extended to more complex geometries.

  10. Coupled-cluster method: A lattice-path-based subsystem approximation scheme for quantum lattice models

    SciTech Connect

    Bishop, R. F.; Li, P. H. Y.

    2011-04-15

    An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical (spin-1/2 Heisenberg antiferromagnetic) spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.

  11. Coupled-cluster method: A lattice-path-based subsystem approximation scheme for quantum lattice models

    NASA Astrophysics Data System (ADS)

    Bishop, R. F.; Li, P. H. Y.

    2011-04-01

    An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical (spin-1/2 Heisenberg antiferromagnetic) spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.

  12. Systems and methods for managing shared-path instrumentation and irradiation targets in a nuclear reactor

    DOEpatents

    Heinold, Mark R.; Berger, John F.; Loper, Milton H.; Runkle, Gary A.

    2015-12-29

    Systems and methods permit discriminate access to nuclear reactors. Systems provide penetration pathways to irradiation target loading and offloading systems, instrumentation systems, and other external systems at desired times, while limiting such access during undesired times. Systems use selection mechanisms that can be strategically positioned for space sharing to connect only desired systems to a reactor. Selection mechanisms include distinct paths, forks, diverters, turntables, and other types of selectors. Management methods with such systems permits use of the nuclear reactor and penetration pathways between different systems and functions, simultaneously and at only distinct desired times. Existing TIP drives and other known instrumentation and plant systems are useable with access management systems and methods, which can be used in any nuclear plant with access restrictions.

  13. On the method of logarithmic cumulants for parametric probability density function estimation.

    PubMed

    Krylov, Vladimir A; Moser, Gabriele; Serpico, Sebastiano B; Zerubia, Josiane

    2013-10-01

    Parameter estimation of probability density functions is one of the major steps in the area of statistical image and signal processing. In this paper we explore several properties and limitations of the recently proposed method of logarithmic cumulants (MoLC) parameter estimation approach which is an alternative to the classical maximum likelihood (ML) and method of moments (MoM) approaches. We derive the general sufficient condition for a strong consistency of the MoLC estimates which represents an important asymptotic property of any statistical estimator. This result enables the demonstration of the strong consistency of MoLC estimates for a selection of widely used distribution families originating from (but not restricted to) synthetic aperture radar image processing. We then derive the analytical conditions of applicability of MoLC to samples for the distribution families in our selection. Finally, we conduct various synthetic and real data experiments to assess the comparative properties, applicability and small sample performance of MoLC notably for the generalized gamma and K families of distributions. Supervised image classification experiments are considered for medical ultrasound and remote-sensing SAR imagery. The obtained results suggest that MoLC is a feasible and computationally fast yet not universally applicable alternative to MoM. MoLC becomes especially useful when the direct ML approach turns out to be unfeasible. PMID:23799694
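
    As a concrete instance of MoLC, the sketch below estimates the parameters of a plain gamma distribution from the first two log-cumulants (the paper treats broader families such as the generalized gamma and K distributions): for X ~ Gamma(shape k, scale theta), E[ln X] = digamma(k) + ln(theta) and Var[ln X] = trigamma(k), so k is obtained by inverting the trigamma function and theta follows. The code is an illustration, not the authors' implementation.

        import numpy as np
        from scipy.special import digamma, polygamma
        from scipy.optimize import brentq

        def molc_gamma(samples):
            # First log-cumulant c1 = mean(log X), second log-cumulant c2 = var(log X).
            logx = np.log(samples)
            c1, c2 = logx.mean(), logx.var()
            k = brentq(lambda kk: polygamma(1, kk) - c2, 1e-6, 1e6)   # solve trigamma(k) = c2
            theta = np.exp(c1 - digamma(k))
            return k, theta

        rng = np.random.default_rng(0)
        k_hat, theta_hat = molc_gamma(rng.gamma(shape=3.0, scale=2.0, size=50_000))  # close to (3, 2)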

  14. Methods for estimating dispersal probabilities and related parameters using marked animals

    USGS Publications Warehouse

    Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.

    2001-01-01

    Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a 'patch' depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.

  15. Particle path tracking method in two- and three-dimensional continuously rotating detonation engines

    NASA Astrophysics Data System (ADS)

    Zhou, Rui; Wu, Dan; Liu, Yan; Wang, Jian-Ping

    2014-12-01

    The particle path tracking method is proposed and used in two-dimensional (2D) and three-dimensional (3D) numerical simulations of continuously rotating detonation engines (CRDEs). This method is used to analyze the combustion and expansion processes of the fresh particles, and the thermodynamic cycle process of the CRDE. In a 3D CRDE flow field, as the radius of the annulus increases, the no-injection area proportion increases, the non-detonation proportion decreases, and the detonation height decreases. The flow field parameters on the 3D mid annulus differ from those in the 2D flow field for the same chamber size. The non-detonation proportion in the 3D flow field is less than in the 2D flow field. In the 2D and 3D CRDE, the paths of the flow particles have only a small fluctuation in the circumferential direction. The numerical thermodynamic cycle processes are qualitatively consistent with the three ideal cycle models, and they lie between the ideal F-J cycle and the ideal ZND cycle. The net mechanical work and thermal efficiency are slightly smaller in the 2D simulation than in the 3D simulation. In the 3D CRDE, as the radius of the annulus increases, the net mechanical work is almost constant, and the thermal efficiency increases. The numerical thermal efficiencies are larger than the F-J cycle and much smaller than the ZND cycle.

  16. Using Multiple Methods to teach ASTR 101 students the Path of the Sun and Shadows

    NASA Astrophysics Data System (ADS)

    D'Cruz, Noella L.

    2015-01-01

    It seems surprising that non-science major introductory astronomy students find the daily path of the Sun and shadows created by the Sun challenging to learn even though both can be easily observed (provided students do not look directly at the Sun). In order for our students to master the relevant concepts, we have usually used lecture, a lecture tutorial (from Prather, et al.) followed by think-pair-share questions, a planetarium presentation and an animation from the Nebraska Astronomy Applet Project to teach these topics. We cover these topics in a lecture-only, one semester introductory astronomy course at Joliet Junior College. Feedback from our Spring 2014 students indicated that the planetarium presentation was the most helpful in learning the path of the Sun while none of the four teaching methods was helpful when learning about shadows cast by the Sun. Our students did not find the lecture tutorial to be much help even though such tutorials have been proven to promote deep conceptual change. In Fall 2014, we continued to use these four methods, but we modified how we teach both topics so our students could gain more from the tutorial. We hoped our modifications would cause students to have a better overall grasp of the concepts. After our regular lecture, we gave a shorter than usual planetarium presentation on the path of the Sun and we asked students to work through a shadow activity from Project Astro materials. Then students completed the lecture tutorial and some think-pair-share questions. After this, we asked students to predict the Sun's path on certain days of the year and we used the planetarium projector to show them how well their predictions matched up. We ended our coverage of these topics by asking students a few more think-pair-share questions. In our poster, we will present our approach to teaching these topics in Fall 2014, how our Fall 2014 students feel about our teaching strategies and how they fared on related test questions.

  17. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    NASA Astrophysics Data System (ADS)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements that contribute to the stability of the tunnel structure are identified in order to address the various aspects of reliability and sustainability of the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces a smart approach by which decision-makers will be able to find the overall reliability of the tunnel support system before selecting the final scheme of the lining system. Engineering reliability, a branch of statistics and probability, is applied to this field, and much effort has been made to use it in tunneling by investigating the reliability of the lining support system for the tunnel structure. Therefore, reliability analysis for evaluating the tunnel support performance is the main idea used in this research. Decomposition approaches are used for producing the system block diagram and determining the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining together with the recommended approaches is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for the different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for the different structural subsystems and the results of numerical analyses are obtained in

  18. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    of land, the 4 digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's estimated exposure across these various approaches to space-time path aggregation, presumably because air pollution shows large variation over short distances.

  19. Method of Transverse Displacements Formulation for Calculating the HF Radio Wave Propagation Paths. Statement of the Problem and Preliminary Results

    NASA Astrophysics Data System (ADS)

    Nosikov, I. A.; Bessarab, P. F.; Klimenko, M. V.

    2016-06-01

    Fundamentals of the method of transverse displacements for calculating the HF radio-wave propagation paths are presented. The method is based on the direct variational principle for the optical path functional, but is not reduced to solving the Euler-Lagrange equations. Instead, the initial guess given by an ordered set of points is transformed successively into a ray path, while its endpoints corresponding to the positions of the transmitter and the receiver are kept fixed throughout the entire iteration process. The results of calculation by the method of transverse displacements are compared with known analytical solutions. The importance of using only transverse displacements of the ray path in the optimization procedure is also demonstrated.

  20. New method for estimating greenhouse gas emissions from livestock buildings using open-path FTIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Briz, Susana; Barrancos, José; Nolasco, Dácil; Melián, Gladys; Padrón, Eleazar; Pérez, Nemesio

    2009-09-01

    It is widely known that methane, together with carbon dioxide, is one of the most effective greenhouse gases contributing to global climate change. According to the EMEP/CORINAIR Emission Inventory Guidebook, around 25% of global CH4 emissions originate from animal husbandry, especially from enteric fermentation. However, uncertainties in the CH4 emission factors provided by EMEP/CORINAIR are around 30%. For this reason, work aimed at calculating emissions experimentally is important to improve the estimates of emissions due to livestock and to calculate emission factors not included in this inventory. FTIR spectroscopy has been frequently used in different methodologies to measure emission rates in many environmental problems. Some of these methods are based on dispersion modelling techniques, wind data, micrometeorological measurements or the release of a tracer gas. In this work, a new method for calculating emission rates from livestock buildings applying Open-Path FTIR spectroscopy is proposed. This method is inspired by the accumulation chamber method used for CO2 flux measurements in volcanic areas or CH4 flux measurements in wetlands and aquatic ecosystems. The process is the following: the livestock is outside the building, which is ventilated in order to reduce concentrations to ambient level. Once the livestock has been put inside, the building is completely closed and the concentrations of gases emitted by the livestock begin to increase. The Open-Path system measures the concentration evolution of gases such as CO2, CH4, NH3 and H2O. The slope of the concentration evolution function, dC/dt, at the initial time is directly proportional to the flux of the corresponding gas. This method has been applied in a cow shed in the surroundings of La Laguna (Tenerife Island, Spain). As expected, the evolution of gas concentrations reveals that the livestock building behaves like an accumulation chamber. Preliminary results show that the CH4 emission factor is lower than that proposed by
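
    A minimal sketch of the slope-to-flux step described above, not the authors' actual processing chain: the initial slope dC/dt is taken from a least-squares fit over an assumed early-time window, and the conversion to an emission rate per animal assumes a well-mixed building volume at roughly standard conditions; all numbers are placeholders.

        import numpy as np

        def initial_slope(t_s, conc_ppm, window_s=300.0):
            # Least-squares slope dC/dt (ppm/s) over the first `window_s` seconds,
            # while the closed-building concentration rise is still quasi-linear.
            mask = t_s <= t_s[0] + window_s
            slope, _ = np.polyfit(t_s[mask], conc_ppm[mask], 1)
            return slope

        def emission_rate(slope_ppm_per_s, volume_m3, n_animals,
                          molar_mass_g=16.04, molar_volume_m3=0.0224):
            # Illustrative conversion for a well-mixed volume near standard conditions:
            # ppm/s -> mol/s -> g of CH4 per animal per second.
            mol_per_s = slope_ppm_per_s * 1e-6 * volume_m3 / molar_volume_m3
            return mol_per_s * molar_mass_g / n_animals

        # Hypothetical 20-minute record sampled every 10 s.
        t = np.arange(0.0, 1200.0, 10.0)
        c = 2.0 + 0.05 * t + np.random.default_rng(1).normal(0.0, 0.3, t.size)
        print(emission_rate(initial_slope(t, c), volume_m3=1500.0, n_animals=40))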

  1. Extrapolation of extreme sea levels: incorporation of Over-Threshold-Modeling to the Joint Probability Method

    NASA Astrophysics Data System (ADS)

    Mazas, Franck; Hamm, Luc; Kergadallan, Xavier

    2013-04-01

    In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity for an accurate estimation of extreme sea levels for the risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and the stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge component, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component and the distribution of extreme sea levels is then obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indexes for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
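
    A minimal sketch of the Poisson-GPD step described above, under the assumption that the surge peaks have already been declustered by the POT procedure: a generalized Pareto law is fitted to the excesses over a chosen threshold and combined with the Poisson exceedance rate to give a return level. The threshold, rates and synthetic data are placeholders, and the confidence-interval computation mentioned in the abstract is omitted.

        import numpy as np
        from scipy.stats import genpareto

        def surge_return_level(peaks, threshold, peaks_per_year, return_period_yr):
            # Fit a generalized Pareto law to the excesses of declustered surge peaks
            # over the threshold (location fixed at 0, as in the POT/Poisson-GPD model).
            excesses = peaks[peaks > threshold] - threshold
            shape, _, scale = genpareto.fit(excesses, floc=0.0)
            # Poisson rate of threshold exceedances per year.
            rate = peaks_per_year * excesses.size / peaks.size
            # Surge level exceeded on average once every `return_period_yr` years.
            p = 1.0 / (rate * return_period_yr)
            return threshold + genpareto.ppf(1.0 - p, shape, loc=0.0, scale=scale)

        # Synthetic stand-in for declustered surge peaks (metres), ~20 per year for 40 years.
        rng = np.random.default_rng(3)
        peaks = rng.exponential(scale=0.15, size=800)
        print(surge_return_level(peaks, threshold=0.4, peaks_per_year=20, return_period_yr=100))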

  2. The path of rolling elements in defective bearings: Observations, analysis and methods to estimate spall size

    NASA Astrophysics Data System (ADS)

    Moazen Ahmadi, Alireza; Howard, Carl Q.; Petersen, Dick

    2016-03-01

    This paper describes the experimental investigation of the vibration signature generated by rolling elements entering and exiting a notch defect in the outer raceway of a bearing. The vibration responses of the bearing housing and the displacement between the raceways were measured and analyzed. These key features can be used to estimate the size of the defect, as demonstrated in this paper for a range of shaft speeds and bearing loads. It is shown that existing defect size estimation methods include assumptions about the path of the rolling elements in the defect zone that lead to poor estimates of defect size. A new defect size estimation method is proposed and is shown to be accurate for estimating a range of notch defect geometries over a range of shaft speeds and applied loads.

  3. The simplified self-consistent probabilities method for percolation and its application to interdependent networks

    NASA Astrophysics Data System (ADS)

    Feng, Ling; Pineda Monterola, Christopher; Hu, Yanqing

    2015-06-01

    Interdependent networks in areas ranging from infrastructure to economics are ubiquitous in our society, and the study of their cascading behaviors using percolation theory has attracted much attention in recent years. To analyze the percolation phenomena of these systems, different mathematical frameworks have been proposed, including generating functions, eigenvalue methods, and others. These different frameworks approach phase transition behaviors from different angles and have been very successful in obtaining the different quantities of interest, including the critical threshold, the size of the giant component, the order of the phase transition, and the dynamics of cascading. These methods also vary in their mathematical complexity in dealing with interdependent networks that have additional complexity in terms of the correlation among different layers of networks or links. In this work, we review a particular approach of simple, self-consistent probability equations, and we illustrate that this approach can greatly simplify the mathematical analysis for systems ranging from single-layer networks to various interdependent networks. We give an overview of the detailed framework to study the nature of the critical phase transition, the value of the critical threshold, and the size of the giant component for these different systems.
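
    The simplest instance of such a self-consistent probability equation is ordinary site percolation on a single-layer Erdős-Rényi network with mean degree c, where the giant-component fraction S satisfies S = p(1 - exp(-cS)). The sketch below iterates this fixed point; it is an illustration of the general approach only, not the interdependent-network formulation reviewed in the paper.

        import numpy as np

        def giant_component_fraction(p, c, tol=1e-10, max_iter=10_000):
            # Solve S = p * (1 - exp(-c * S)) by fixed-point iteration.
            # p: fraction of occupied nodes, c: mean degree of the ER network.
            S = p  # any positive start works; S = 0 is the trivial fixed point
            for _ in range(max_iter):
                S_new = p * (1.0 - np.exp(-c * S))
                if abs(S_new - S) < tol:
                    break
                S = S_new
            return S_new

        # Phase transition at p_c = 1/c for a single-layer ER network with c = 4.
        for p in (0.2, 0.25, 0.3, 0.5, 1.0):
            print(p, round(giant_component_fraction(p, c=4.0), 4))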

  4. A Comparison of EPI Sampling, Probability Sampling, and Compact Segment Sampling Methods for Micro and Small Enterprises

    PubMed Central

    Chao, Li-Wei; Szrek, Helena; Peltzer, Karl; Ramlagan, Shandir; Fleming, Peter; Leite, Rui; Magerman, Jesswill; Ngwenya, Godfrey B.; Pereira, Nuno Sousa; Behrman, Jere

    2011-01-01

    Finding an efficient method for sampling micro- and small-enterprises (MSEs) for research and statistical reporting purposes is a challenge in developing countries, where registries of MSEs are often nonexistent or outdated. This lack of a sampling frame creates an obstacle in finding a representative sample of MSEs. This study uses computer simulations to draw samples from a census of businesses and non-businesses in the Tshwane Municipality of South Africa, using three different sampling methods: the traditional probability sampling method, the compact segment sampling method, and the World Health Organization’s Expanded Programme on Immunization (EPI) sampling method. Three mechanisms by which the methods could differ are tested: the proximity selection of respondents, the at-home selection of respondents, and the use of inaccurate probability weights. The results highlight the importance of revisits and accurate probability weights, but the lesser effect of proximity selection on the samples’ statistical properties. PMID:22582004

  5. New method for path-length equalization of long single-mode fibers for interferometry

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Monnier, J. D.; Ozdowy, K.; Woillez, J.; Perrin, G.

    2014-07-01

    The ability to use single mode (SM) fibers for beam transport in optical interferometry offers practical advantages over conventional long vacuum pipes. One challenge facing fiber transport is maintaining constant differential path length in an environment where thermal variations can lead to cm-level changes from day to night. We have fabricated three composite cables of length 470 m, each containing 4 copper wires and 3 SM fibers that operate at the astronomical H band (1500-1800 nm). Multiple fibers allow us to test the performance of a circular core fiber (SMF28), a panda-style polarization-maintaining (PM) fiber, and lastly a specialty dispersion-compensated PM fiber. We will present experimental results using precision electrical resistance measurements of a composite cable beam transport system. We find that the application of 1200 W over a 470 m cable causes the optical path difference in air to change by 75 mm (+/- 2 mm) and the resistance to change from 5.36 Ω to 5.50 Ω. Additionally, we show control of the dispersion of 470 m of fiber in a single polarization using white light interference fringes (λc=1575 nm, Δλ=75 nm) using our method.

  6. Series Expansion Method for Asymmetrical Percolation Models with Two Connection Probabilities

    NASA Astrophysics Data System (ADS)

    Inui, Norio; Komatsu, Genichi; Kameoka, Koichi

    2000-01-01

    In order to study the solvability of the percolation model based on Guttmann and Enting's conjecture, the power series for the percolation probability in the form of ∑_n H_n(q) p^n is examined. Although the power series is in principle given by calculating the inverse of the transfer matrix, it is very hard to obtain the inverse matrix, which contains many complex polynomials as elements. We introduce a new series expansion technique which does not require inverting the transfer matrix. By using the new procedure, we derive the series of the asymmetrical percolation probability, including the isotropic percolation probability as a special case.

  7. Probability density function method for variable-density pressure-gradient-driven turbulence and mixing

    SciTech Connect

    Bakosi, Jozsef; Ristorcelli, Raymond J

    2010-01-01

    Probability density function (PDF) methods are extended to variable-density pressure-gradient-driven turbulence. We apply the new method to compute the joint PDF of density and velocity in a non-premixed binary mixture of different-density molecularly mixing fluids under gravity. The full time-evolution of the joint PDF is captured in the highly non-equilibrium flow: starting from a quiescent state, transitioning to fully developed turbulence and finally being dissipated by molecular diffusion. High-Atwood-number effects (as distinguished from the Boussinesq case) are accounted for: both hydrodynamic turbulence and material mixing are treated at arbitrary density ratios, with the specific volume, mass flux and all their correlations in closed form. An extension of the generalized Langevin model, originally developed for the Lagrangian fluid particle velocity in constant-density shear-driven turbulence, is constructed for variable-density pressure-gradient-driven flows. The persistent small-scale anisotropy, a fundamentally 'non-Kolmogorovian' feature of flows under external acceleration forces, is captured by a tensorial diffusion term based on the external body force. The material mixing model for the fluid density, an active scalar, is developed based on the beta distribution. The beta-PDF is shown to be capable of capturing the mixing asymmetry and of accurately representing the density through transition, in fully developed turbulence and in the decay process. The joint model for hydrodynamics and active material mixing yields a time-accurate evolution of the turbulent kinetic energy and Reynolds stress anisotropy without resorting to gradient diffusion hypotheses, and represents the mixing state by the density PDF itself, eliminating the need for dubious mixing measures. Direct numerical simulations of the homogeneous Rayleigh-Taylor instability are used for model validation.

  8. Observations of cloud liquid water path over oceans: Optical and microwave remote sensing methods

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Rossow, William B.

    1994-01-01

    Published estimates of cloud liquid water path (LWP) from satellite-measured microwave radiation show little agreement, even about the relative magnitudes of LWP in the tropics and midlatitudes. To understand these differences and to obtain more reliable estimates, optical and microwave LWP retrieval methods are compared using the International Satellite Cloud Climatology Project (ISCCP) and Special Sensor Microwave/Imager (SSM/I) data. Errors in microwave LWP retrieval associated with uncertainties in surface, atmosphere, and cloud properties are assessed. Sea surface temperature may not produce large LWP errors, if accurate contemporaneous measurements are used in the retrieval. An uncertainty in estimated near-surface wind speed as large as 2 m/s produces uncertainty in LWP of about 5 mg/sq cm. Cloud liquid water temperature has only a small effect on LWP retrievals (rms errors less than 2 mg/sq cm), if errors in the temperature are less than 5 C; however, such errors can produce spurious variations of LWP with latitude and season. Errors in atmospheric column water vapor (CWV) are strongly coupled with errors in LWP (for some retrieval methods) causing errors as large as 30 mg/sq cm. Because microwave radiation is much less sensitive to clouds with small LWP (less than 7 mg/sq cm) than visible wavelength radiation, the microwave results are very sensitive to the process used to separate clear and cloudy conditions. Different cloud detection sensitivities in different microwave retrieval methods bias estimated LWP values. Comparing ISCCP and SSM/I LWPs, we find that the two estimated values are consistent in global, zonal, and regional means for warm, nonprecipitating clouds, which have average LWP values of about 5 mg/sq cm and occur much more frequently than precipitating clouds. Ice water path (IWP) can be roughly estimated from the differences between ISCCP total water path and SSM/I LWP for cold, nonprecipitating clouds. IWP in the winter hemisphere is about

  9. A reliable acoustic path: Physical properties and a source localization method

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo

    2012-12-01

    The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists at medium range. Ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that the source localization method based on arrival angle is feasible in this environment. However, the conventional methods suffer from the complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method is to minimize the distance between the signal subspace and the space spanned by the array manifold in a finite range-depth space rather than the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.

  10. Application of LAMBDA Method to the Calculation of Slant Path Wet Vapor Content of GPS Signals

    NASA Astrophysics Data System (ADS)

    Huang, Shan-Qi; Wang, Jie-Xian; Wang, Xiao-Ya; Chen, Jun-Ping

    2009-10-01

    With the improvement of GPS data processing techniques and calculation accuracy, GPS has been increasingly and widely applied to atmospheric science. In GPS meteorology research the slant path wet vapor content (SWV) is one of the significant parameters. To address the poor real-time performance of the method proposed by Song Shuli et al. in 2004, which directly calculates the SWV by means of the precise ephemeris, the IGS clock error and the observed value of the LC combination after cycle-slip processing, the LAMBDA method, which has a more mature application to city virtual reference stations (VRS), is applied here to the ambiguity search problem. Trial calculations with data verify that the method is feasible and that there is better uniformity when the calculated result is projected into the zenith direction. The atmospheric delay in the vertical direction obtained with this method is compared with the results of GAMIT and BERNESE; the agreement with the BERNESE result is generally better than 1.5 cm, and the agreement with the GAMIT result is generally better than 10 cm.

  11. Causal-Path Local Time-Stepping in the discontinuous Galerkin method for Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Angulo, L. D.; Alvarez, J.; Teixeira, F. L.; Pantoja, M. F.; Garcia, S. G.

    2014-01-01

    We introduce a novel local time-stepping technique for marching-in-time algorithms. The technique is denoted as Causal-Path Local Time-Stepping (CPLTS) and it is applied for two time integration techniques: fourth-order low-storage explicit Runge-Kutta (LSERK4) and second-order Leap-Frog (LF2). The CPLTS method is applied to evolve Maxwell's curl equations using a Discontinuous Galerkin (DG) scheme for the spatial discretization. Numerical results for LF2 and LSERK4 are compared with analytical solutions and the Montseny's LF2 technique. The results show that the CPLTS technique improves the dispersive and dissipative properties of LF2-LTS scheme.

  12. Error reduction methods for integrated-path differential-absorption lidar measurements.

    PubMed

    Chen, Jeffrey R; Numata, Kenji; Wu, Stewart T

    2012-07-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log". PMID:22772254

  13. Solving spatial inverse problems using the probability perturbation method: An S-GEMS implementation

    NASA Astrophysics Data System (ADS)

    Li, Ting; Caers, Jef

    2008-09-01

    The probability perturbation method (PPM) is introduced as a flexible and efficient sampling technique for generating inverse solutions under a given prior geological constraint (prior model). In this paper, we present a methodology for producing software code that runs PPM within a public-domain geostatistical software package called the Stanford Geostatistical Earth Modeling Software (S-GEMS). The challenge in creating such code lies in the great diversity of forward models as well as prior models that can be handled by the PPM. Therefore, our software solution must be highly flexible and extensible such that it can be tailored to the various applications at hand. Our implementation has two main objectives: (1) to create an integrated working environment which provides users easy access to the functionalities of the PPM through a general user interface as well as to visualize results; (2) to allow users to plug in their application-specific code into the PPM algorithm workflow. We provide a two-part solution. The first part, which is hard-coded in S-GEMS as a plug-in module, runs the Dekker-Brent optimization algorithm to control the parameter perturbation needed for the inversion. It generates the PPM user interface and allows visualization of the spatial domain of interest using S-GEMS graphics capability. The second part is coded in object-oriented Python scripts and is used to control the PPM execution in S-GEMS. Users can program their particular needs in scripts and load them into S-GEMS as part of the PPM workflow. The same mechanism can be used to extend the capabilities of PPM itself by implementing new PPM variants in Python and making them a part of the base class hierarchy. Case studies are used to demonstrate the flexibility of our code. This approach requires the user to adapt only a small amount of Python code, without modifying or re-compiling the core S-GEMS code.
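
    A schematic of the outer loop the abstract describes, with the geostatistical simulation and the forward model replaced by stand-in functions: the probability field is perturbed by a single parameter r between the current realization (r = 0) and the prior marginal probability (r = 1), and r is found by a bounded Brent-type one-dimensional search, here scipy's minimize_scalar standing in for the Dekker-Brent routine. Everything below is an illustrative assumption, not the S-GEMS implementation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def simulate(prob_map, seed):
            # Stand-in for a geostatistical simulation: draw a binary facies model from
            # per-cell probabilities.  A fixed seed keeps the realization a smooth
            # function of the perturbation parameter r during the 1D search.
            rng = np.random.default_rng(seed)
            return (rng.random(prob_map.shape) < prob_map).astype(float)

        def mismatch(realization, observed):
            # Stand-in for the forward model plus objective function (data misfit).
            return float(np.mean((realization - observed) ** 2))

        def ppm(marginal, observed, n_outer=15, seed0=0):
            current = simulate(marginal, seed0)
            for it in range(1, n_outer + 1):
                def objective(r):
                    # Perturbed probabilities: r = 0 keeps the current realization,
                    # r = 1 falls back to the prior marginal probability.
                    perturbed = (1.0 - r) * current + r * marginal
                    return mismatch(simulate(perturbed, seed0 + it), observed)
                best = minimize_scalar(objective, bounds=(0.0, 1.0), method="bounded")
                current = simulate((1.0 - best.x) * current + best.x * marginal, seed0 + it)
            return current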

  14. Use of Monte Carlo Methods for Evaluating Probability of False Positives in Archaeoastronomy Alignments

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Ambruster, C.; Jewell, E.

    2012-01-01

    Simple Monte Carlo simulations can assist both the cultural astronomy researcher, while the research design is developed, and the eventual evaluators of research products. Following the method we describe allows assessment of the probability of false positives associated with a site. Even seemingly evocative alignments may be meaningless, depending on the site characteristics and the number of degrees of freedom the researcher allows. In many cases, an observer may have to limit comments to "it is nice and it might be culturally meaningful", rather than saying "it is impressive so it must mean something". We describe a basic language with an associated set of attributes to be cataloged. These can be used to set up simple Monte Carlo simulations for a site. Without corroborating cultural evidence, or trends with similar attributes (for example a number of sites showing the same anticipatory date), the Monte Carlo simulation can be used as a filter to establish the likelihood that the observed alignment phenomena are the result of random factors. Such analysis may temper any eagerness to prematurely attribute cultural meaning to an observation. For the most complete description of an archaeological site, we urge researchers to capture the site attributes in a manner which permits statistical analysis. We also encourage cultural astronomers to record that which does not work, and that which may seem to align but has no discernible meaning. Properly reporting situational information as tenets of the research design will reduce the subjective nature of archaeoastronomical interpretation. Examples from field work will be discussed.
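
    A minimal sketch of the kind of filter described above: random sightline azimuths are simulated for a site and compared against a set of target bearings, giving the probability that at least one 'alignment' appears by chance. The number of sightlines, target azimuths and tolerance are hypothetical.

        import numpy as np

        def chance_alignment_probability(n_sightlines, targets_deg, tol_deg=1.0,
                                         n_trials=100_000, seed=0):
            # Probability that a site with `n_sightlines` randomly oriented features
            # shows at least one alignment within `tol_deg` of any target azimuth
            # purely by chance.
            rng = np.random.default_rng(seed)
            az = rng.uniform(0.0, 360.0, size=(n_trials, n_sightlines, 1))
            targets = np.asarray(targets_deg, dtype=float).reshape(1, 1, -1)
            diff = np.abs(az - targets)
            diff = np.minimum(diff, 360.0 - diff)      # wrap-around angular separation
            hit = (diff <= tol_deg).any(axis=(1, 2))   # any feature near any target
            return hit.mean()

        # Four walls tested against four hypothetical solstice rise/set bearings, +/- 1 degree.
        print(chance_alignment_probability(4, [58.0, 122.0, 238.0, 302.0]))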

  15. Comparison of micrometeorological methods using open-path optical instruments for measuring methane emission from agricultural sites

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we evaluated the accuracies of two relatively new micrometeorological methods using open-path tunable diode laser absorption spectrometers: the vertical radial plume mapping method (US EPA OTM-10) and the backward Lagrangian stochastic method (Wintrax®). We have evaluated the accuracy of t...

  16. A probability evaluation method of early deterioration condition for the critical components of wind turbine generator systems

    NASA Astrophysics Data System (ADS)

    Hu, Yaogang; Li, Hui; Liao, Xinglin; Song, Erbing; Liu, Haitao; Chen, Z.

    2016-08-01

    This study determines the early deterioration condition of critical components for a wind turbine generator system (WTGS). Due to the uncertain, fluctuating and intermittent nature of wind, early deterioration condition evaluation poses a challenge to the traditional vibration-based condition monitoring methods. Considering their thermal inertia and strong anti-interference capacity, temperature characteristic parameters used as a deterioration indicator are not easily disturbed by uncontrollable noise and the uncertain nature of wind. This paper provides a probability evaluation method of the early deterioration condition for critical components based only on temperature characteristic parameters. First, the dynamic threshold of the deterioration degree function was proposed by analyzing the operational data between temperature and rotor speed. Second, a probability evaluation method of the early deterioration condition was presented. Finally, two cases showed the validity of the proposed probability evaluation method in detecting the early deterioration condition and in tracking its further deterioration for the critical components.

  17. Method- and species-specific detection probabilities of fish occupancy in Arctic lakes: Implications for design and management

    USGS Publications Warehouse

    Haynes, Trevor B.; Rosenberger, Amanda E.; Lindberg, Mark S.; Whitman, Matthew; Schmutz, Joel A.

    2013-01-01

    Studies examining species occurrence often fail to account for false absences in field sampling. We investigate detection probabilities of five gear types for six fish species in a sample of lakes on the North Slope, Alaska. We used an occupancy modeling approach to provide estimates of detection probabilities for each method. Variation in gear- and species-specific detection probability was considerable. For example, detection probabilities for the fyke net ranged from 0.82 (SE = 0.05) for least cisco (Coregonus sardinella) to 0.04 (SE = 0.01) for slimy sculpin (Cottus cognatus). Detection probabilities were also affected by site-specific variables such as depth of the lake, year, day of sampling, and lake connection to a stream. With the exception of the dip net and shore minnow traps, each gear type provided the highest detection probability of at least one species. Results suggest that a multimethod approach may be most effective when attempting to sample the entire fish community of Arctic lakes. Detection probability estimates will be useful for designing optimal fish sampling and monitoring protocols in Arctic lakes.

  18. Path durations for use in the stochastic‐method simulation of ground motions

    USGS Publications Warehouse

    Boore, David M.; Thompson, Eric M.

    2014-01-01

    The stochastic method of ground‐motion simulation assumes that the energy in a target spectrum is spread over a duration DT. DT is generally decomposed into the duration due to source effects (DS) and to path effects (DP). For the most commonly used source, seismological theory directly relates DS to the source corner frequency, accounting for the magnitude scaling of DT. In contrast, DP is related to propagation effects that are more difficult to represent by analytic equations based on the physics of the process. We are primarily motivated to revisit DT because the function currently employed by many implementations of the stochastic method for active tectonic regions underpredicts observed durations, leading to an overprediction of ground motions for a given target spectrum. Further, there is some inconsistency in the literature regarding which empirical duration corresponds to DT. Thus, we begin by clarifying the relationship between empirical durations and DT as used in the first author’s implementation of the stochastic method, and then we develop a new DP relationship. The new DP function gives significantly longer durations than in the previous DP function, but the relative contribution of DP to DT still diminishes with increasing magnitude. Thus, this correction is more important for small events or subfaults of larger events modeled with the stochastic finite‐fault method.
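
    A minimal sketch of the duration decomposition DT = DS + DP. The source duration uses the standard Brune corner-frequency relation; the path term is a simple linear-in-distance placeholder, not the new DP function developed in the paper.

        def source_duration(moment_dyne_cm, stress_drop_bars=100.0, beta_km_s=3.5):
            # Brune source duration DS ~ 1/fc with
            # fc = 4.906e6 * beta * (stress_drop / M0)**(1/3)  (beta in km/s, M0 in dyne-cm).
            fc = 4.906e6 * beta_km_s * (stress_drop_bars / moment_dyne_cm) ** (1.0 / 3.0)
            return 1.0 / fc

        def total_duration(moment_dyne_cm, r_km, path_coeff=0.05):
            # DT = DS + DP, with DP approximated here as a simple linear function of
            # distance; the coefficient is a placeholder, not the paper's DP model.
            return source_duration(moment_dyne_cm) + path_coeff * r_km

        # Magnitude 5 event: log10(M0) = 1.5*M + 16.05  ->  M0 ~ 3.5e23 dyne-cm.
        print(total_duration(10 ** (1.5 * 5.0 + 16.05), r_km=80.0))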

  19. A routing path construction method for key dissemination messages in sensor networks.

    PubMed

    Moon, Soo Young; Cho, Tae Ho

    2014-01-01

    Authentication is an important security mechanism for detecting forged messages in a sensor network. Each cluster head (CH) in dynamic key distribution schemes forwards a key dissemination message that contains encrypted authentication keys within its cluster to next-hop nodes for the purpose of authentication. The forwarding path of the key dissemination message strongly affects the number of nodes to which the authentication keys in the message are actually distributed. We propose a routing method for the key dissemination messages to increase the number of nodes that obtain the authentication keys. In the proposed method, each node selects next-hop nodes to which the key dissemination message will be forwarded based on secret key indexes, the distance to the sink node, and the energy consumption of its neighbor nodes. The experimental results show that the proposed method can increase by 50-70% the number of nodes to which authentication keys in each cluster are distributed compared to geographic and energy-aware routing (GEAR). In addition, the proposed method can detect false reports earlier by using the distributed authentication keys, and it consumes less energy than GEAR when the false traffic ratio (FTR) is ≥ 10%. PMID:25136649
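
    The paper's exact weighting is not given in the abstract, so the sketch below is a purely hypothetical next-hop scoring function that simply combines the three criteria listed: shared key indexes, distance to the sink, and neighbor energy. Field names and weights are invented for illustration.

        def next_hop_score(neighbor, my_keys, my_sink_distance, weights=(0.5, 0.3, 0.2)):
            # Hypothetical score combining the three criteria named in the abstract:
            # shared secret-key indexes, progress toward the sink, and residual energy.
            w_key, w_dist, w_energy = weights
            key_overlap = len(my_keys & neighbor["keys"]) / max(len(my_keys), 1)
            progress = max(0.0, 1.0 - neighbor["sink_distance"] / my_sink_distance)
            energy = neighbor["residual_energy"] / neighbor["initial_energy"]
            return w_key * key_overlap + w_dist * progress + w_energy * energy

        def select_next_hops(neighbors, my_keys, my_sink_distance, k=2):
            # Forward the key-dissemination message to the k best-scoring neighbors.
            ranked = sorted(neighbors,
                            key=lambda n: next_hop_score(n, my_keys, my_sink_distance),
                            reverse=True)
            return ranked[:k]

        neighbors = [
            {"id": 1, "keys": {3, 7}, "sink_distance": 40.0, "residual_energy": 0.6, "initial_energy": 1.0},
            {"id": 2, "keys": {7, 9}, "sink_distance": 55.0, "residual_energy": 0.9, "initial_energy": 1.0},
            {"id": 3, "keys": set(),  "sink_distance": 35.0, "residual_energy": 0.8, "initial_energy": 1.0},
        ]
        print([n["id"] for n in select_next_hops(neighbors, my_keys={3, 7, 9}, my_sink_distance=60.0)])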

  20. A SNP discovery method to assess variant allele probability from next-generation resequencing data

    PubMed Central

    Shen, Yufeng; Wan, Zhengzheng; Coarfa, Cristian; Drabek, Rafal; Chen, Lei; Ostrowski, Elizabeth A.; Liu, Yue; Weinstock, George M.; Wheeler, David A.; Gibbs, Richard A.; Yu, Fuli

    2010-01-01

    Accurate identification of genetic variants from next-generation sequencing (NGS) data is essential for immediate large-scale genomic endeavors such as the 1000 Genomes Project, and is crucial for further genetic analysis based on the discoveries. The key challenge in single nucleotide polymorphism (SNP) discovery is to distinguish true individual variants (occurring at a low frequency) from sequencing errors (often occurring at frequencies orders of magnitude higher). Therefore, knowledge of the error probabilities of base calls is essential. We have developed Atlas-SNP2, a computational tool that detects and accounts for systematic sequencing errors caused by context-related variables in a logistic regression model learned from training data sets. Subsequently, it estimates the posterior error probability for each substitution through a Bayesian formula that integrates prior knowledge of the overall sequencing error probability and the estimated SNP rate with the results from the logistic regression model for the given substitutions. The estimated posterior SNP probability can be used to distinguish true SNPs from sequencing errors. Validation results show that Atlas-SNP2 achieves a false-positive rate of lower than 10%, with an ∼5% or lower false-negative rate. PMID:20019143
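
    A minimal sketch of the Bayesian combination step described above (not Atlas-SNP2's trained model): a context-dependent error probability from a logistic regression is merged with a prior SNP rate to give a posterior SNP probability for a single observed substitution; the regression score and prior are placeholders.

        import math

        def logistic(z):
            return 1.0 / (1.0 + math.exp(-z))

        def posterior_snp_probability(p_error, prior_snp_rate=1e-3):
            # Bayes combination of a context-dependent error probability (for example
            # from a logistic-regression model over base-call context features) with a
            # prior SNP rate; a true variant is assumed to always show the mismatch.
            num = prior_snp_rate
            den = prior_snp_rate + p_error * (1.0 - prior_snp_rate)
            return num / den

        # Hypothetical regression score for one observed substitution's context.
        p_err = logistic(-4.0)                   # ~1.8% chance of a sequencing error
        print(posterior_snp_probability(p_err))  # ~0.053 posterior SNP probability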

  1. Binomial Test Method for Determining Probability of Detection Capability for Fracture Critical Applications

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2011-01-01

    The capability of an inspection system is established by application of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that, for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both Hit-Miss and signal amplitude testing, where signal amplitudes are reduced to Hit-Miss data by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of the POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are executed sequentially in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
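
    The 90/95 criterion itself reduces to a one-sided binomial lower confidence bound on POD at a given flaw size. The sketch below shows that check via the Clopper-Pearson bound; it illustrates the acceptance criterion only, not the sequential Directed DOEPOD procedure.

        from scipy.stats import beta

        def pod_lower_bound(hits, trials, confidence=0.95):
            # One-sided Clopper-Pearson lower confidence bound on the probability of
            # detection from `hits` detections in `trials` inspections.
            if hits == 0:
                return 0.0
            return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

        def meets_90_95(hits, trials):
            # True if the data demonstrate >= 0.90 POD with 95% confidence.
            return pod_lower_bound(hits, trials) >= 0.90

        print(pod_lower_bound(29, 29))   # ~0.902: the classical 29-of-29 demonstration
        print(meets_90_95(28, 29))       # one miss drops the bound below 0.90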

  2. New Method for the Characterization of 3D Preferential Flow Paths at the Field

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Preferential flow path development in the field is the result of the complex interaction of multiple processes relating to the soil's structure, moisture condition, stress level, and biological activities. Visualizing and characterizing the cracking behavior and preferential path evolution with so...

  3. Adapting the nudged elastic band method for determining minimum-energy paths of chemical reactions in enzymes.

    PubMed

    Xie, Li; Liu, Haiyan; Yang, Weitao

    2004-05-01

    Optimization of reaction paths for enzymatic systems is a challenging problem because such systems have a very large number of degrees of freedom and many of these degrees are flexible. To meet this challenge, an efficient, robust and general approach is presented based on the well-known nudged elastic band reaction path optimization method with the following extensions: (1) soft spectator degrees of freedom are excluded from path definitions by using only inter-atomic distances corresponding to forming/breaking bonds in a reaction; (2) a general transformation of the distances is defined to treat multistep reactions without knowing the partitioning of steps in advance; (3) a multistage strategy, in which path optimizations are carried out for reference systems with gradually decreasing rigidity, is developed to maximize the opportunity of obtaining continuously changing environments along the path. We demonstrate the applicability of the approach using the acylation reaction of type A beta-lactamase as an example. The reaction mechanism investigated involves four elementary reaction steps, eight forming/breaking bonds. We obtained a continuous minimum energy path without any assumption on reaction coordinates, or on the possible sequence or the concertedness of chemical events. We expect our approach to have general applicability in the modeling of enzymatic reactions with quantum mechanical/molecular mechanical models. PMID:15267723
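
    A minimal nudged elastic band on a two-dimensional toy surface, to illustrate the basic machinery the paper extends (perpendicular true force, parallel spring force, fixed endpoints); the double-well potential, spring constant and step sizes are arbitrary choices, and none of the enzyme-specific extensions described above are included.

        import numpy as np

        def potential(r):
            x, y = r
            return (x**2 - 1.0)**2 + y**2            # double well: minima at (+/-1, 0)

        def gradient(r):
            x, y = r
            return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

        def neb(start, end, n_images=11, k_spring=5.0, step=0.01, n_steps=3000):
            # Minimal nudged elastic band: the true force acts perpendicular to the
            # band, the spring force acts along it, and the endpoints stay fixed.
            path = np.linspace(start, end, n_images)
            path[1:-1, 1] += 0.4                     # kick interior images off the line
            for _ in range(n_steps):
                new_path = path.copy()
                for i in range(1, n_images - 1):
                    tau = path[i + 1] - path[i - 1]
                    tau /= np.linalg.norm(tau)
                    g = gradient(path[i])
                    f_perp = -(g - np.dot(g, tau) * tau)
                    f_spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                           - np.linalg.norm(path[i] - path[i - 1])) * tau
                    new_path[i] = path[i] + step * (f_perp + f_spring)
                path = new_path
            return path

        band = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
        print(round(max(potential(r) for r in band), 3))   # ~1.0, the saddle-point barrier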

  4. An adaptive compromise programming method for multi-objective path optimization

    NASA Astrophysics Data System (ADS)

    Li, Rongrong; Leung, Yee; Lin, Hui; Huang, Bo

    2013-04-01

    Network routing problems generally involve multiple objectives which may conflict with one another. An effective way to solve such problems is to generate a set of Pareto-optimal solutions that is small enough to be handled by a decision maker and large enough to give an overview of all possible trade-offs among the conflicting objectives. To accomplish this, the present paper proposes an adaptive method based on compromise programming to assist decision makers in identifying Pareto-optimal paths, particularly for non-convex problems. This method can provide an unbiased approximation of the Pareto-optimal alternatives by adaptively changing the origin and direction of search in the objective space via the dynamic updating of the largest unexplored region until an appropriately structured Pareto front is captured. To demonstrate the efficacy of the proposed methodology, a case study is carried out for the transportation of dangerous goods in the road network of Hong Kong with the support of a geographic information system. The experimental results confirm the effectiveness of the approach.

  5. Creation of a global land cover and a probability map through a new map integration method

    NASA Astrophysics Data System (ADS)

    Kinoshita, Tsuguki; Iwao, Koki; Yamagata, Yoshiki

    2014-05-01

    Global land cover maps are widely used for assessment and in research of various kinds, and in recent years have also come to be used for socio-economic forecasting. However, existing maps are not very accurate, and differences between maps also contribute to their unreliability. Improving the accuracy of global land cover maps would benefit a number of research fields. In this paper, we propose a methodology for using ground truth data to integrate existing global land cover maps. We checked the accuracy of a map created using this methodology and found that the accuracy of the new map is 74.6%, which is 3% higher than for existing maps. We then created a 0.5-min latitude by 0.5-min longitude probability map. This map indicates the probability of agreement between the category class of the new map and truth data. Using the map, we found that the probabilities of cropland and grassland are relatively low compared with other land cover types. This appears to be because the definitions of cropland differ between maps, so the accuracy may be improved by including pasture and idle plot categories.

  6. A Galerkin-based formulation of the probability density evolution method for general stochastic finite element systems

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Vissarion; Kalogeris, Ioannis

    2016-05-01

    The present paper proposes a Galerkin finite element projection scheme for the solution of the partial differential equations (PDEs) involved in the probability density evolution method, for the linear and nonlinear static analysis of stochastic systems. According to the principle of preservation of probability, the probability density evolution of a stochastic system is expressed by its corresponding Fokker-Planck (FP) stochastic partial differential equation. Direct integration of the FP equation is feasible only for simple systems with a small number of degrees of freedom, due to analytical and/or numerical intractability. However, rewriting the FP equation conditioned on the random event description, a generalized density evolution equation (GDEE) can be obtained, which can be reduced to a one-dimensional PDE. Two Galerkin finite element schemes are proposed for the numerical solution of the resulting PDEs, namely a time-marching discontinuous Galerkin scheme and the Streamline-Upwind/Petrov-Galerkin (SUPG) scheme. In addition, a reformulation of the classical GDEE is proposed, which implements the principle of probability preservation in space instead of time, making this approach suitable for the stochastic analysis of finite element systems. The advantages of the FE Galerkin methods, and in particular SUPG, over finite difference schemes such as the modified Lax-Wendroff scheme, which is the most frequently used method for the solution of the GDEE, are illustrated with numerical examples and explored further.

  7. MODIFICATION OF THE 14C MOST-PROBABLE-NUMBER METHOD FOR USE WITH NONPOLAR AND VOLATILE SUBSTRATES

    EPA Science Inventory

    A method was developed to allow the use of volatile and nonpolar substrates in (14)C most-probable-number tests. Naphthalene or hexadecane was sorbed to filter paper disks and submerged in minimal medium. The procedure reduced the volatilization of the substrates while allowing t...

  8. A DIRECT METHOD TO DETERMINE THE PARALLEL MEAN FREE PATH OF SOLAR ENERGETIC PARTICLES WITH ADIABATIC FOCUSING

    SciTech Connect

    He, H.-Q.; Wan, W. E-mail: wanw@mail.iggcas.ac.cn

    2012-03-01

    The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.

  9. A novel method to identify herds with an increased probability of disease introduction due to animal trade.

    PubMed

    Frössling, Jenny; Nusinovici, Simon; Nöremark, Maria; Widgren, Stefan; Lindberg, Ann

    2014-11-15

    In the design of surveillance, there is often a desire to target high risk herds. Such risk-based approaches result in better allocation of resources and improve the performance of surveillance activities. For many contagious animal diseases, movement of live animals is a main route of transmission, and because of this, herds that purchase many live animals or have a large contact network due to trade can be seen as a high risk stratum of the population. This paper presents a new method to assess herd disease risk in animal movement networks. It is an improvement to current network measures that takes direction, temporal order, and also movement size and probability of disease into account. In the study, the method was used to calculate a probability of disease ratio (PDR) of herds in simulated datasets, and of real herds based on animal movement data from dairy herds included in a bulk milk survey for Coxiella burnetii. Known differences in probability of disease are easily incorporated in the calculations and the PDR was calculated while accounting for regional differences in probability of disease, and also by applying equal probability of disease throughout the population. Each herd's increased probability of disease due to purchase of animals was compared to both the average herd and herds within the same risk stratum. The results show that the PDR is able to capture the different circumstances related to disease prevalence and animal trade contact patterns. Comparison of results based on inclusion or exclusion of differences in risk also highlights how ignoring such differences can influence the ability to correctly identify high risk herds. The method shows a potential to be useful for risk-based surveillance, in the classification of herds in control programmes or to represent influential contacts in risk factor studies. PMID:25139432

  10. Integrated Multi Path Model to Calculate Radionuclide Release From a Repository Using Wavelet Galerkin Method

    SciTech Connect

    Nasif, Hesham R.; Neyama, Atsushi

    2002-07-01

    This work presents the WIRS code, developed using the wavelet Galerkin method to solve a radionuclide transport model in the near field and far field of a repository for high-level radioactive waste. After overpack failure, radionuclides diffuse through the bentonite buffer material to the water-bearing fracture around the repository, are transported horizontally through this geosphere, and then transported vertically through the major water conducting fault (MWCF) to reach the biosphere. The radionuclide transport barriers considered in this model are the engineered barrier system (EBS), the geosphere, and the MWCF. The hydraulic conductivity of the bentonite is more than three orders of magnitude smaller than that of the surrounding host rock, so the only transport mechanism through the EBS is diffusion. In the host rock, the problem is of advection-diffusion type with parameters varying strongly from one medium to another due to the variability in length, transmissivity and other transport-relevant properties of the transport paths. Daubechies' wavelets are used as basis functions to solve the nonlinear partial differential equations arising from the model formulation of the radionuclide transport. Since the scaling functions are compactly supported, only a finite number of the connection coefficients are nonzero. The resultant matrix has a block diagonal structure, which can be inverted easily. We tested our WGM algorithm with several problems to verify the model. The solutions are very accurate with a proper selection of the Daubechies order and dilation order. The solution is very accurate at the interfaces where the radionuclide concentration exhibits very steep gradients. (authors)

  11. Burnup calculation by the method of first-flight collision probabilities using average chords prior to the first collision

    SciTech Connect

    Karpushkin, T. Yu.

    2012-12-15

    A technique to calculate the burnup of materials of cells and fuel assemblies using the matrices of first-flight neutron collision probabilities rebuilt at a given burnup step is presented. A method to rebuild and correct first collision probability matrices using average chords prior to the first neutron collision, which are calculated with the help of geometric modules of constructed stochastic neutron trajectories, is described. Results of calculation of the infinite multiplication factor for elementary cells with a modified material composition compared to the reference one as well as calculation of material burnup in the cells and fuel assemblies of a VVER-1000 are presented.

  12. Comparing Two Different Methods to Evaluate Convariance-Matrix of Debris Orbit State in Collision Probability Estimation

    NASA Astrophysics Data System (ADS)

    Cheng, Haowen; Liu, Jing; Xu, Yang

    The evaluation of the covariance matrix is an inevitable step when estimating collision probability based on theory. Generally, there are two different methods to compute the covariance matrix. One is the so-called Tracking-Delta-Fitting method, first introduced for estimating the collision probability using TLE catalogue data, in which the covariance matrix is evaluated by fitting a series of differences between orbits propagated from earlier data and updated orbit data. In the second method, the covariance matrix is evaluated in the process of orbit determination. Both methods have their difficulties when used in collision probability estimation. In the first method, the covariance matrix is evaluated based only on historical orbit data, ignoring information from the latest orbit determination. As a result, the accuracy of the method strongly depends on the stability of the covariance matrix of the latest updated orbit. In the second method, the evaluation of the covariance matrix is acceptable when the determined orbit satisfies the weighted-least-squares estimation; it depends on the accuracy of the observation error covariance, which is hard to obtain in real applications and which, in our research, is evaluated by analyzing the residuals of the orbit determination. In this paper we provide numerical tests to compare these two methods. A simulation of cataloguing objects in the LEO, MEO and GEO regions has been carried out for a time span of 3 months. The influence of orbit maneuvers has been included in the GEO object cataloguing simulation. For LEO object cataloguing, the effect of atmospheric density variation has also been considered. At the end of the paper the accuracies of the evaluated covariance matrices and the estimated collision probabilities are tested and compared.
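
    For context, once a combined covariance matrix is available, the short-term encounter collision probability is commonly obtained by integrating the relative-position Gaussian over the hard-body circle on the encounter plane. The sketch below does this numerically with placeholder miss distance, covariance and radius; it is a generic illustration, not the cataloguing simulation described in the paper.

        import numpy as np

        def collision_probability(miss_vector, combined_cov, hard_body_radius, n=400):
            # 2D short-term encounter probability: integrate the relative-position
            # Gaussian (mean = projected miss vector, covariance = combined covariance
            # projected on the encounter plane) over the hard-body circle.
            cov_inv = np.linalg.inv(combined_cov)
            norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(combined_cov)))
            xs = np.linspace(-hard_body_radius, hard_body_radius, n)
            dx = xs[1] - xs[0]
            X, Y = np.meshgrid(xs, xs)
            inside = X**2 + Y**2 <= hard_body_radius**2
            d = np.stack([X - miss_vector[0], Y - miss_vector[1]], axis=-1)
            q = np.einsum("...i,ij,...j->...", d, cov_inv, d)
            return float(np.sum(norm * np.exp(-0.5 * q) * inside) * dx * dx)

        # Placeholder numbers: 150 m miss distance, 10 m combined hard-body radius,
        # covariance semi-axes of ~100 m and ~30 m on the encounter plane.
        cov = np.array([[100.0**2, 0.0], [0.0, 30.0**2]])
        print(collision_probability(np.array([150.0, 0.0]), cov, 10.0))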

  13. A novel path generation method of onsite 5-axis surface inspection using the dual-cubic NURBS representation

    NASA Astrophysics Data System (ADS)

    Li, Wen-long; Wang, Gang; Zhang, Gang; Pang, Chang-tao; Yin, Zhou-pin

    2016-09-01

    Onsite surface inspection with a touch probe or a laser scanner is a promising technique for efficiently evaluating surface profile error. The existing work on 5-axis inspection path generation has a serious drawback, however, as there can be a drastic orientation change of the inspection axis. Such a sudden change may exceed the stringent physical limits on the speed and acceleration of the rotary motions of the machine tool. In this paper, we propose a novel path generation method for onsite 5-axis surface inspection. Accessibility cones are defined and used to generate alternative interference-free inspection directions. Then, the control points are optimally calculated to obtain the dual-cubic Non-Uniform Rational B-Spline (NURBS) curves, which respectively determine the path points and the axis vectors in an inspection path. The generated inspection path is smooth and interference-free, which addresses the ‘mutation and shake’ problems and guarantees stable speed and acceleration of the machine tool rotary motions. Its feasibility and validity are verified by onsite inspection experiments on impeller blades.

  14. A method for estimating the probability of lightning causing a methane ignition in an underground mine

    SciTech Connect

    Sacks, H.K.; Novak, T.

    2008-03-15

    During the past decade, several methane/air explosions in abandoned or sealed areas of underground coal mines have been attributed to lightning. Previously published work by the authors showed, through computer simulations, that currents from lightning could propagate down steel-cased boreholes and ignite explosive methane/air mixtures. The presented work expands on the model and describes a methodology based on IEEE Standard 1410-2004 to estimate the probability of an ignition. The methodology provides a means to better estimate the likelihood that an ignition could occur underground and, more importantly, allows the calculation of what-if scenarios to investigate the effectiveness of engineering controls to reduce the hazard. The computer software used for calculating fields and potentials is also verified by comparing computed results with an independently developed theoretical model of electromagnetic field propagation through a conductive medium.

  15. Reaction path determination for quantum mechanical/molecular mechanical modeling of enzyme reactions by combining first order and second order ``chain-of-replicas'' methods

    NASA Astrophysics Data System (ADS)

    Cisneros, G. Andrés; Liu, Haiyan; Lu, Zhenyu; Yang, Weitao

    2005-03-01

    A two-step procedure for the determination of reaction paths in enzyme systems is presented. This procedure combines two chain-of-states methods: a quantum mechanical/molecular mechanical (QM/MM) implementation of the nudged elastic band (NEB) method and a second order parallel path optimizer method, both recently developed in our laboratory. In the first step, a reaction path determination is performed with the NEB method, along with a restrained minimization procedure for the MM environment to obtain a first approximation to the reaction path. In the second step, the calculated path is refined with the parallel path optimizer method. By combining these two methods the reaction paths are determined accurately, and in addition, the number of path optimization iterations is significantly reduced. This procedure is tested by calculating both steps of the isomerization of 2-oxo-4-hexenedioate by 4-oxalocrotonate tautomerase, which have been previously determined by our group. The calculated paths agree with the previously reported results and we obtain a reduction of 45%-55% in the number of path optimization cycles.
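
    As a reminder of what the first (NEB) stage does, the sketch below runs a plain nudged elastic band on a two-dimensional model potential: the true force perpendicular to the band plus a spring force along the band relaxes the images toward a minimum energy path. The potential, spring constant, step size and steepest-descent updates are assumptions for illustration; neither the QM/MM coupling nor the second-order parallel path optimizer of the paper is represented.

      import numpy as np

      def V(r):
          x, y = r
          return (x**2 - 1.0)**2 + y**2          # double well with barrier V(0,0) = 1

      def grad_V(r):
          x, y = r
          return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

      def neb(r_start, r_end, n_images=11, k_spring=1.0, step=0.01, n_iter=2000):
          """Minimal nudged elastic band: relax interior images between two fixed endpoints."""
          path = np.linspace(r_start, r_end, n_images)
          path[:, 1] += 0.5 * np.sin(np.linspace(0.0, np.pi, n_images))   # start off the MEP
          for _ in range(n_iter):
              new_path = path.copy()
              for i in range(1, n_images - 1):
                  tau = path[i + 1] - path[i - 1]
                  tau /= np.linalg.norm(tau)
                  # true force with its component along the tangent removed
                  f_true = -grad_V(path[i])
                  f_perp = f_true - np.dot(f_true, tau) * tau
                  # spring force acting only along the tangent keeps images spread out
                  f_spring = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                                         - np.linalg.norm(path[i] - path[i - 1])) * tau
                  new_path[i] = path[i] + step * (f_perp + f_spring)
              path = new_path
          return path

      band = neb(np.array([-1.0, 0.0]), np.array([1.0, 0.0]))
      print(max(V(r) for r in band))   # approaches the barrier height V(0,0) = 1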

  16. Points on the Path to Probability.

    ERIC Educational Resources Information Center

    Kiernan, James F.

    2001-01-01

    Presents the problem of points and the development of the binomial triangle, or Pascal's triangle. Examines various attempts to solve this problem to give students insight into the nature of mathematical discovery. (KHR)

  17. Fitting a distribution to censored contamination data using Markov Chain Monte Carlo methods and samples selected with unequal probabilities.

    PubMed

    Williams, Michael S; Ebel, Eric D

    2014-11-18

    The fitting of statistical distributions to chemical and microbial contamination data is a common application in risk assessment. These distributions are used to make inferences regarding even the most pedestrian of statistics, such as the population mean. The reason for the heavy reliance on a fitted distribution is the presence of left-, right-, and interval-censored observations in the data sets, with censored observations being the result of nondetects in an assay, the use of screening tests, and other practical limitations. Considerable effort has been expended to develop statistical distributions and fitting techniques for a wide variety of applications. Of the various fitting methods, Markov Chain Monte Carlo methods are common. An underlying assumption for many of the proposed Markov Chain Monte Carlo methods is that the data represent independent and identically distributed (iid) observations from an assumed distribution. This condition is satisfied when samples are collected using a simple random sampling design. Unfortunately, samples of food commodities are generally not collected in accordance with a strict probability design. Nevertheless, pseudosystematic sampling efforts (e.g., collection of a sample hourly or weekly) from a single location in the farm-to-table continuum are reasonable approximations of a simple random sample. The assumption that the data represent an iid sample from a single distribution is more difficult to defend if samples are collected at multiple locations in the farm-to-table continuum or risk-based sampling methods are employed to preferentially select samples that are more likely to be contaminated. This paper develops a weighted bootstrap estimation framework that is appropriate for fitting a distribution to microbiological samples that are collected with unequal probabilities of selection. An example based on microbial data, derived by the Most Probable Number technique, demonstrates the method and highlights the
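
    A minimal sketch of a weighted bootstrap step of this general kind is shown below, assuming each observation carries a known selection probability. The resampling weights (inverse selection probabilities), the toy contamination data and the choice of the mean as the target statistic are illustrative assumptions, not the paper's full estimation framework.

      import numpy as np

      def weighted_bootstrap_mean(values, selection_probs, n_boot=2000, seed=0):
          """Weighted bootstrap for samples drawn with unequal selection probabilities.

          Each observation is resampled with probability proportional to its sampling
          weight (inverse of its selection probability), so preferentially selected,
          over-represented units are down-weighted. Returns replicates of the mean.
          """
          rng = np.random.default_rng(seed)
          values = np.asarray(values, dtype=float)
          w = 1.0 / np.asarray(selection_probs, dtype=float)
          p = w / w.sum()
          n = len(values)
          idx = rng.choice(n, size=(n_boot, n), replace=True, p=p)
          return values[idx].mean(axis=1)

      # Hypothetical contamination data (log10 CFU/g) with risk-based oversampling
      conc = np.array([0.2, 0.5, 1.1, 2.3, 2.8, 3.0, 3.4])
      sel_p = np.array([0.05, 0.05, 0.10, 0.30, 0.40, 0.45, 0.50])
      reps = weighted_bootstrap_mean(conc, sel_p)
      print(reps.mean(), np.percentile(reps, [2.5, 97.5]))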

  18. Method for Evaluation of Outage Probability on Random Access Channel in Mobile Communication Systems

    NASA Astrophysics Data System (ADS)

    Kollár, Martin

    2012-05-01

    In order to access the cell, all mobile communication technologies use a so-called random access procedure. For example, in GSM this is represented by sending the CHANNEL REQUEST message from the Mobile Station (MS) to the Base Transceiver Station (BTS), which is consequently forwarded as a CHANNEL REQUIRED message to the Base Station Controller (BSC). If the BTS decodes some noise on the Random Access Channel (RACH) as a random access by mistake (a so-called 'phantom RACH'), then it is a question of pure coincidence which 'establishment cause' the BTS thinks it has recognized. A typical invalid channel access request or phantom RACH is characterized by an IMMEDIATE ASSIGNMENT procedure (assignment of an SDCCH or TCH) which is not followed by an ESTABLISH INDICATION from MS to BTS. In this paper a mathematical model for evaluation of the Power RACH Busy Threshold (RACHBT) is described and discussed, with the aim of guaranteeing a predetermined outage probability on the RACH. It focuses on the Global System for Mobile Communications (GSM); however, the obtained results can be generalized to other mobile technologies (i.e., WCDMA and LTE).

  19. Burst suppression probability algorithms: state-space methods for tracking EEG burst suppression

    NASA Astrophysics Data System (ADS)

    Chemali, Jessica; Ching, ShiNung; Purdon, Patrick L.; Solt, Ken; Brown, Emery N.

    2013-10-01

    Objective. Burst suppression is an electroencephalogram pattern in which bursts of electrical activity alternate with an isoelectric state. This pattern is commonly seen in states of severely reduced brain activity such as profound general anesthesia, anoxic brain injuries, hypothermia and certain developmental disorders. Devising accurate, reliable ways to quantify burst suppression is an important clinical and research problem. Although thresholding and segmentation algorithms readily identify burst suppression periods, analysis algorithms require long intervals of data to characterize burst suppression at a given time and provide no framework for statistical inference. Approach. We introduce the concept of the burst suppression probability (BSP) to define the brain's instantaneous propensity of being in the suppressed state. To conduct dynamic analyses of burst suppression we propose a state-space model in which the observation process is a binomial model and the state equation is a Gaussian random walk. We estimate the model using an approximate expectation maximization algorithm and illustrate its application in the analysis of rodent burst suppression recordings under general anesthesia and a patient during induction of controlled hypothermia. Main result. The BSP algorithms track burst suppression on a second-to-second time scale, and make possible formal statistical comparisons of burst suppression at different times. Significance. The state-space approach suggests a principled and informative way to analyze burst suppression that can be used to monitor, and eventually to control, the brain states of patients in the operating room and in the intensive care unit.
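
    A minimal sketch of a forward filter for such a model is given below: a Gaussian random-walk state, binomial observations of suppressed samples per window, and the BSP obtained through a logistic link. The state noise variance, window size, example counts and the Newton update for the posterior mode are standard choices assumed for illustration; the paper's approximate expectation-maximization estimation of the model parameters is not implemented here.

      import numpy as np

      def logistic(x):
          return 1.0 / (1.0 + np.exp(-x))

      def bsp_filter(k, n, sigma2_v=0.05, x0=0.0, v0=1.0):
          """Forward filter for a burst suppression probability state-space model:
          random-walk state x_t, binomial observations k_t of n_t suppressed samples,
          BSP_t = logistic(x_t). Returns the filtered BSP estimates."""
          x_post, v_post = x0, v0
          bsp = []
          for k_t, n_t in zip(k, n):
              # one-step prediction under the random walk
              x_pred, v_pred = x_post, v_post + sigma2_v
              # Newton iterations for the posterior mode of the binomial update
              x = x_pred
              for _ in range(10):
                  p = logistic(x)
                  g = (k_t - n_t * p) - (x - x_pred) / v_pred
                  h = -n_t * p * (1.0 - p) - 1.0 / v_pred
                  x -= g / h
              x_post, v_post = x, -1.0 / h
              bsp.append(logistic(x_post))
          return np.array(bsp)

      # Hypothetical suppression counts: 20 EEG samples classified per 1-s window
      counts = [2, 3, 5, 8, 12, 15, 18, 19, 19, 17]
      trials = [20] * len(counts)
      print(np.round(bsp_filter(counts, trials), 2))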

  20. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley

    2007-01-01

    Propulsion ground test facilities face the daily challenge of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Over the last decade NASA's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and exceeded the capabilities of numerous test facility and test article components. A logistic regression mathematical modeling technique has been developed to predict the probability of successfully completing a rocket propulsion test. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous dependent variable Y, where Y can only be one of two possible outcomes, in this case success or failure of accomplishing a full-duration test. The use of logistic regression modeling is not new; however, modeling propulsion ground test facilities using logistic regression is both a new and unique application of the statistical technique. Results from this type of model provide project managers with insight and confidence into the effectiveness of rocket propulsion ground testing.
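
    A minimal sketch of a logistic regression of test success on a few predictors is shown below. The predictor variables, the toy data and the gradient-ascent fit are assumptions for illustration, not the variables or model actually used at the test facilities.

      import numpy as np

      def fit_logistic(X, y, n_iter=500, lr=0.5):
          """Fit P(success | x) = 1 / (1 + exp(-(b0 + x.w))) by gradient ascent
          on the log-likelihood, after standardizing the predictors."""
          X = np.asarray(X, dtype=float)
          mu, sd = X.mean(axis=0), X.std(axis=0)
          Xs = np.column_stack([np.ones(len(X)), (X - mu) / sd])
          w = np.zeros(Xs.shape[1])
          for _ in range(n_iter):
              p = 1.0 / (1.0 + np.exp(-Xs @ w))
              w += lr * Xs.T @ (y - p) / len(y)
          return w, mu, sd

      def predict_success(model, x_new):
          w, mu, sd = model
          z = np.concatenate([[1.0], (np.asarray(x_new, dtype=float) - mu) / sd])
          return 1.0 / (1.0 + np.exp(-z @ w))

      # Hypothetical predictors: [planned test duration (s), test-article maturity score]
      X = [[10, 3], [60, 2], [300, 1], [30, 3], [500, 1], [120, 2]]
      y = np.array([1, 1, 0, 1, 0, 1])   # 1 = full-duration test completed
      model = fit_logistic(X, y)
      print(round(predict_success(model, [200, 2]), 3))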

  1. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    ERIC Educational Resources Information Center

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  2. Cluster membership probabilities from proper motions and multi-wavelength photometric catalogues. I. Method and application to the Pleiades cluster

    NASA Astrophysics Data System (ADS)

    Sarro, L. M.; Bouy, H.; Berihuete, A.; Bertin, E.; Moraux, E.; Bouvier, J.; Cuillandre, J.-C.; Barrado, D.; Solano, E.

    2014-03-01

    Context. With the advent of deep wide surveys, large photometric and astrometric catalogues of literally all nearby clusters and associations have been produced. The unprecedented accuracy and sensitivity of these data sets and their broad spatial, temporal and wavelength coverage make obsolete the classical membership selection methods that were based on a handful of colours and luminosities. We present a new technique designed to take full advantage of the high dimensionality (photometric, astrometric, temporal) of such a survey to derive self-consistent and robust membership probabilities of the Pleiades cluster. Aims: We aim to develop a methodology to infer membership probabilities for the Pleiades cluster from the DANCe multidimensional astro-photometric data set in a consistent way throughout the entire derivation. The determination of the membership probabilities has to be applicable to censored data and must incorporate the measurement uncertainties into the inference procedure. Methods: We use Bayes' theorem and a curvilinear forward model for the likelihood of the measurements of cluster members in the colour-magnitude space, to infer posterior membership probabilities. The distribution of the cluster members' proper motions and the distribution of contaminants in the full multidimensional astro-photometric space are modelled with a mixture-of-Gaussians likelihood. Results: We analyse several representation spaces composed of the proper motions plus a subset of the available magnitudes and colour indices. We select two prominent representation spaces composed of variables selected using feature relevance determination techniques based on Random Forests, and analyse the resulting samples of high probability candidates. We consistently find lists of high probability (p > 0.9975) candidates with ≈1000 sources, 4 to 5 times more than obtained in the most recent astro-photometric studies of the cluster. Conclusions: Multidimensional data sets require
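
    The Bayes-theorem step described above can be sketched in a few lines for a single representation space with one Gaussian for members and one for contaminants. The two-dimensional proper-motion space, the prior, and the Gaussian parameters below are illustrative assumptions, not the paper's full mixture model with measurement uncertainties and censoring.

      import numpy as np
      from scipy.stats import multivariate_normal

      def membership_probability(x, prior_member, mean_m, cov_m, mean_c, cov_c):
          """Posterior membership probability from Bayes' theorem, with Gaussian
          likelihoods for cluster members (m) and field contaminants (c)."""
          like_m = multivariate_normal.pdf(x, mean_m, cov_m)
          like_c = multivariate_normal.pdf(x, mean_c, cov_c)
          num = prior_member * like_m
          return num / (num + (1.0 - prior_member) * like_c)

      # Hypothetical 2D representation space: two proper-motion components (mas/yr)
      x = np.array([19.0, -45.0])
      p = membership_probability(x,
                                 prior_member=0.01,
                                 mean_m=np.array([19.7, -44.8]), cov_m=np.diag([4.0, 4.0]),
                                 mean_c=np.array([0.0, 0.0]),    cov_c=np.diag([900.0, 900.0]))
      print(round(p, 4))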

  3. A double-index method to classify Kuroshio intrusion paths in the Luzon Strait

    NASA Astrophysics Data System (ADS)

    Huang, Zhida; Liu, Hailong; Hu, Jianyu; Lin, Pengfei

    2016-06-01

    A double index (DI), which is made up of two sub-indices, is proposed to describe the spatial patterns of the Kuroshio intrusion and mesoscale eddies west of the Luzon Strait, based on satellite altimeter data. The area-integrated negative and positive geostrophic vorticities are defined as the Kuroshio warm eddy index (KWI) and the Kuroshio cold eddy index (KCI), respectively. Three typical spatial patterns are identified by the DI: the Kuroshio warm eddy path (KWEP), the Kuroshio cold eddy path (KCEP), and the leaking path. The primary features of the DI and three patterns are further investigated and compared with previous indices. The effects of the integrated area and the algorithm of the integration are investigated in detail. In general, the DI can overcome the problem of previously used indices in which the positive and negative geostrophic vorticities cancel each other out. Thus, the proportions of missing and misjudged events are greatly reduced using the DI. The DI, as compared with previously used indices, can better distinguish the paths of the Kuroshio intrusion and can be used for further research.

  4. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-01

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole path. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of a screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method. PMID:27608986

  5. A Method for Calculating the Probability of Successfully Completing a Rocket Propulsion Ground Test

    NASA Technical Reports Server (NTRS)

    Messer, Bradley P.

    2004-01-01

    Propulsion ground test facilities face the daily challenges of scheduling multiple customers into limited facility space and successfully completing their propulsion test projects. Due to budgetary and schedule constraints, NASA and industry customers are pushing to test more components, for less money, in a shorter period of time. As these new rocket engine component test programs are undertaken, the lack of technology maturity in the test articles, combined with pushing the test facilities' capabilities to their limits, tends to lead to an increase in facility breakdowns and unsuccessful tests. Over the last five years Stennis Space Center's propulsion test facilities have performed hundreds of tests, collected thousands of seconds of test data, and broken numerous test facility and test article parts. While various initiatives have been implemented to provide better propulsion test techniques and improve the quality, reliability, and maintainability of goods and parts used in the propulsion test facilities, unexpected failures during testing still occur quite regularly due to the harsh environment in which the propulsion test facilities operate. Previous attempts at modeling the lifecycle of a propulsion component test project have met with little success. Each of the attempts suffered from incomplete or inconsistent data on which to base the models. By focusing on the actual test phase of the test project rather than the formulation, design or construction phases of the test project, the quality and quantity of available data increases dramatically. A logistic regression model has been developed from the data collected over the last five years, allowing the probability of successfully completing a rocket propulsion component test to be calculated. A logistic regression model is a mathematical modeling approach that can be used to describe the relationship of several independent predictor variables X_1, X_2, ..., X_k to a binary or dichotomous

  6. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Because advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  7. Probability of identification (POI): a statistical model for the validation of qualitative botanical identification methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A qualitative botanical identification method (BIM) is an analytical procedure which returns a binary result (1 = Identified, 0 = Not Identified). A BIM may be used by a buyer, manufacturer, or regulator to determine whether a botanical material being tested is the same as the target (desired) mate...

  8. A Method for Estimating the Probability of Floating Gate Prompt Charge Loss in a Radiation Environment

    NASA Technical Reports Server (NTRS)

    Edmonds, L. D.

    2016-01-01

    Since advancing technology has been producing smaller structures in electronic circuits, the floating gates in modern flash memories are becoming susceptible to prompt charge loss from ionizing radiation environments found in space. A method for estimating the risk of a charge-loss event is given.

  9. Equal optical path beam splitters by use of amplitude-splitting and wavefront-splitting methods for pencil beam interferometer.

    SciTech Connect

    Qian, S.; Takacs, P.

    2003-08-03

    A beam splitter to create two separated parallel beams is a critical unit of a pencil beam interferometer, for example the long trace profiler (LTP). The operating principle of the beam splitter can be based upon either amplitude-splitting (AS) or wavefront-splitting (WS). For precision measurements with the LTP, an equal optical path system with two parallel beams is desired. Frequency drift of the light source in a non-equal optical path system will cause the interference fringes to drift. An equal-optical-path amplitude-splitting prism beam splitter (AS-EBS) and a wavefront-splitting phase-shift beam splitter (WS-PSBS) are introduced. These beam splitters are well suited to the stability requirement for a pencil beam interferometer due to the characteristics of monolithic structure and equal optical path. Several techniques to produce the WS-PSBS by hand are presented. In addition, the WS-PSBS using double thin plates, made from microscope cover plates, has great advantages of economy, convenience, availability and ease of adjustment over other beam splitting methods. Comparison of stability measurements made with the AS-EBS, WS-PSBS, and other beam splitters is presented.

  10. Fast and accurate global multiphase arrival tracking: the irregular shortest-path method in a 3-D spherical earth model

    NASA Astrophysics Data System (ADS)

    Huang, Guo-Jiao; Bai, Chao-Ying; Greenhalgh, Stewart

    2013-09-01

    The traditional grid/cell-based wavefront expansion algorithms, such as the shortest path algorithm, can only find the first arrivals or multiply reflected (or mode converted) waves transmitted from subsurface interfaces, but cannot calculate the other later reflections/conversions having a minimax time path. In order to overcome the above limitations, we introduce the concept of a stationary minimax time path of Fermat's Principle into the multistage irregular shortest path method. Here we extend it from Cartesian coordinates for a flat earth model to global ray tracing of multiple phases in a 3-D complex spherical earth model. The ray tracing results for 49 different kinds of crustal, mantle and core phases show that the maximum absolute traveltime error is less than 0.12 s and the average absolute traveltime error is within 0.09 s when compared with the AK135 theoretical traveltime tables for a 1-D reference model. Numerical tests in terms of computational accuracy and CPU time consumption indicate that the new scheme is an accurate, efficient and a practical way to perform 3-D multiphase arrival tracking in regional or global traveltime tomography.
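
    For orientation, the sketch below shows the plain first-arrival special case of shortest-path ray tracing: Dijkstra's algorithm over grid nodes with edge times built from the local slowness. The 2D flat grid, 8-neighbour connectivity and the toy velocity model are assumptions; the paper's multistage irregular shortest-path scheme for later minimax-time phases in a 3-D spherical earth is not reproduced.

      import heapq
      import numpy as np

      def first_arrival_times(slowness, source, dx=1.0):
          """Shortest-path (Dijkstra) first-arrival traveltimes on a 2D grid.

          slowness: 2D array of 1/velocity at each node; source: (i, j) index.
          Edges connect the 8 neighbours; edge time = distance * mean slowness.
          """
          ni, nj = slowness.shape
          times = np.full((ni, nj), np.inf)
          times[source] = 0.0
          heap = [(0.0, source)]
          offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
          while heap:
              t, (i, j) = heapq.heappop(heap)
              if t > times[i, j]:
                  continue
              for di, dj in offsets:
                  a, b = i + di, j + dj
                  if 0 <= a < ni and 0 <= b < nj:
                      dist = dx * (di * di + dj * dj) ** 0.5
                      t_new = t + dist * 0.5 * (slowness[i, j] + slowness[a, b])
                      if t_new < times[a, b]:
                          times[a, b] = t_new
                          heapq.heappush(heap, (t_new, (a, b)))
          return times

      model = np.full((50, 50), 1.0 / 3.0)    # uniform 3 km/s layer, slowness in s/km
      model[25:, :] = 1.0 / 5.0               # faster half-space below
      print(first_arrival_times(model, (0, 0), dx=1.0)[-1, -1])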

  11. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problems of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. Then this method is applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on the global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.
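
    Dempster's rule itself is easy to state in code. The sketch below combines two point-valued basic probability assignments from two hypothetical data sources (the paper works with interval-valued probabilities, which this sketch does not represent).

      from itertools import product

      def dempster_combine(m1, m2):
          """Dempster's rule of combination for two basic probability assignments.

          m1, m2: dicts mapping frozenset hypotheses to masses that sum to 1.
          Returns the combined assignment, renormalized by the conflict mass.
          """
          combined, conflict = {}, 0.0
          for (a, x), (b, y) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + x * y
              else:
                  conflict += x * y
          if conflict >= 1.0:
              raise ValueError("totally conflicting evidence")
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      # Hypothetical two-source evidence over ground-cover classes {forest, crop, water}
      spectral = {frozenset({"forest"}): 0.6,
                  frozenset({"forest", "crop"}): 0.3,
                  frozenset({"forest", "crop", "water"}): 0.1}
      terrain  = {frozenset({"forest"}): 0.4,
                  frozenset({"crop"}): 0.3,
                  frozenset({"forest", "crop", "water"}): 0.3}
      print(dempster_combine(spectral, terrain))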

  12. Most Probable Number Rapid Viability PCR Method to Detect Viable Spores of Bacillus anthracis in Swab Samples

    SciTech Connect

    Letant, S E; Kane, S R; Murphy, G A; Alfaro, T M; Hodges, L; Rose, L; Raber, E

    2008-05-30

    This note presents a comparison of Most-Probable-Number Rapid Viability (MPN-RV) PCR and traditional culture methods for the quantification of Bacillus anthracis Sterne spores in macrofoam swabs generated by the Centers for Disease Control and Prevention (CDC) for a multi-center validation study aimed at testing environmental swab processing methods for recovery, detection, and quantification of viable B. anthracis spores from surfaces. Results show that spore numbers provided by the MPN RV-PCR method were in statistical agreement with the CDC conventional culture method for all three levels of spores tested (10{sup 4}, 10{sup 2}, and 10 spores) even in the presence of dirt. In addition to detecting low levels of spores in environmental conditions, the MPN RV-PCR method is specific, and compatible with automated high-throughput sample processing and analysis protocols.

  13. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    NASA Astrophysics Data System (ADS)

    Schmidt, Matthew; Constable, Steve; Ing, Christopher; Roy, Pierre-Nicholas

    2014-06-01

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics sampling using a Langevin equation based sampling of the canonical distribution. This LePIGS method originally incorporated a trivial trial wavefunction, ψT, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 - 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  14. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    SciTech Connect

    Schmidt, Matthew; Constable, Steve; Ing, Christopher; Roy, Pierre-Nicholas

    2014-06-21

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics sampling using a Langevin equation based sampling of the canonical distribution. This LePIGS method originally incorporated a trivial trial wavefunction, ψ{sub T}, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  15. Bootstrapping & Separable Monte Carlo Simulation Methods Tailored for Efficient Assessment of Probability of Failure of Dynamic Systems

    NASA Astrophysics Data System (ADS)

    Jehan, Musarrat

    The response of a dynamic system is random. There is randomness in both the applied loads and the strength of the system. Therefore, to account for the uncertainty, the safety of the system must be quantified using its probability of survival (reliability). Monte Carlo Simulation (MCS) is a widely used method for probabilistic analysis because of its robustness. However, a challenge in reliability assessment using MCS is that the high computational cost limits the accuracy of MCS. Haftka et al. [2010] developed an improved sampling technique for reliability assessment called separable Monte Carlo (SMC) that can significantly increase the accuracy of estimation without increasing the cost of sampling. However, this method was applied to time-invariant problems involving two random variables only. This dissertation extends SMC to random vibration problems with multiple random variables. This research also develops a novel method for estimation of the standard deviation of the probability of failure of a structure under static or random vibration. The method is demonstrated on quarter car models and a wind turbine. The proposed method is validated using repeated standard MCS.
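
    The separable idea can be illustrated on a time-invariant two-variable case: instead of pairing the i-th load with the i-th capacity, every load sample is compared with every capacity sample, so the same expensive response evaluations yield many more comparisons. The normal distributions and sample size below are assumptions for illustration; the dissertation extends the technique to random vibration with multiple random variables.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1000
      load = rng.normal(500.0, 80.0, n)          # "expensive" response samples
      capacity = rng.normal(800.0, 100.0, n)     # capacity samples

      # Crude MC: pair the i-th load with the i-th capacity (n comparisons)
      p_crude = np.mean(load > capacity)

      # Separable MC: compare every load sample with every capacity sample,
      # reusing the same n response evaluations for n*n comparisons
      p_smc = np.mean(load[:, None] > capacity[None, :])

      print(p_crude, p_smc)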

  16. FicTrac: a visual method for tracking spherical motion and generating fictive animal paths.

    PubMed

    Moore, Richard J D; Taylor, Gavin J; Paulk, Angelique C; Pearson, Thomas; van Swinderen, Bruno; Srinivasan, Mandyam V

    2014-03-30

    Studying how animals interface with a virtual reality can further our understanding of how attention, learning and memory, sensory processing, and navigation are handled by the brain, at both the neurophysiological and behavioural levels. To this end, we have developed a novel vision-based tracking system, FicTrac (Fictive path Tracking software), for estimating the path an animal makes whilst rotating an air-supported sphere using only input from a standard camera and computer vision techniques. We have found that the accuracy and robustness of FicTrac outperforms a low-cost implementation of a standard optical mouse-based approach for generating fictive paths. FicTrac is simple to implement for a wide variety of experimental configurations and, importantly, is fast to execute, enabling real-time sensory feedback for behaving animals. We have used FicTrac to record the behaviour of tethered honeybees, Apis mellifera, whilst presenting visual stimuli in both open-loop and closed-loop experimental paradigms. We found that FicTrac could accurately register the fictive paths of bees as they walked towards bright green vertical bars presented on an LED arena. Using FicTrac, we have demonstrated closed-loop visual fixation in both the honeybee and the fruit fly, Drosophila melanogaster, establishing the flexibility of this system. FicTrac provides the experimenter with a simple yet adaptable system that can be combined with electrophysiological recording techniques to study the neural mechanisms of behaviour in a variety of organisms, including walking vertebrates. PMID:24491637

  17. Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods

    NASA Astrophysics Data System (ADS)

    Whyte, Refael; Streeter, Lee; Cree, Michael J.; Dorrington, Adrian A.

    2015-11-01

    Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous wave light source and measure the returning modulation envelopes: phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed form solution for the direct and global returns can be computed in nine frames with the constraint that the global return is a spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested; where for the full-field measurement, it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique.

  18. Mean-free-paths in concert and chamber music halls and the correct method for calibrating dodecahedral sound sources.

    PubMed

    Beranek, Leo L; Nishihara, Noriko

    2014-01-01

    The Eyring/Sabine equations assume that in a large irregular room a sound wave travels in straight lines from one surface to another, that the surfaces have an average sound absorption coefficient αav, and that the mean-free-path between reflections is 4 V/Stot where V is the volume of the room and Stot is the total area of all of its surfaces. No account is taken of diffusivity of the surfaces. The 4 V/Stot relation was originally based on experimental determinations made by Knudsen (Architectural Acoustics, 1932, pp. 132-141). This paper sets out to test the 4 V/Stot relation experimentally for a wide variety of unoccupied concert and chamber music halls with seating capacities from 200 to 5000, using the measured sound strengths Gmid and reverberation times RT60,mid. Computer simulations of the sound fields for nine of these rooms (of varying shapes) were also made to determine the mean-free-paths by that method. The study shows that 4 V/Stot is an acceptable relation for mean-free-paths in the Sabine/Eyring equations except for halls of unusual shape. Also demonstrated is the proper method for calibrating the dodecahedral sound source used for measuring the sound strength G, i.e., the reverberation chamber method. PMID:24437762
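
    As a quick numerical check of the 4 V/Stot relation, the sketch below launches diffuse (cosine-weighted) rays from the walls of a rectangular room and compares the mean distance to the next wall with 4 V/Stot. The shoebox room dimensions and ray count are assumptions, and a bare box is of course far simpler than the halls measured in the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def mean_free_path_mc(dims, n_rays=50_000):
          """Monte Carlo mean free path for a box: rays start at points uniformly
          distributed over the walls with a cosine-weighted (diffuse) direction."""
          dims = np.asarray(dims, dtype=float)
          # choose a wall with probability proportional to its area (two walls per axis)
          areas = np.repeat(np.array([dims[1] * dims[2], dims[0] * dims[2], dims[0] * dims[1]]), 2)
          faces = rng.choice(6, size=n_rays, p=areas / areas.sum())
          lengths = np.empty(n_rays)
          for r, f in enumerate(faces):
              axis, side = f // 2, f % 2
              p = rng.uniform(0.0, dims)              # random point, then pin one coordinate
              p[axis] = 0.0 if side == 0 else dims[axis]
              u1, u2 = rng.uniform(), rng.uniform()
              ct = np.sqrt(u1)                        # cos(theta) ~ sqrt(U) gives a Lambertian lobe
              st, phi = np.sqrt(1.0 - u1), 2.0 * np.pi * u2
              d = np.empty(3)
              d[axis] = ct if side == 0 else -ct      # inward normal component
              other = [a for a in range(3) if a != axis]
              d[other[0]], d[other[1]] = st * np.cos(phi), st * np.sin(phi)
              # distance to the first wall hit along direction d
              with np.errstate(divide="ignore", invalid="ignore"):
                  t = np.where(d > 0, (dims - p) / d, np.where(d < 0, -p / d, np.inf))
              lengths[r] = t.min()
          return lengths.mean()

      dims = (30.0, 20.0, 15.0)                       # hypothetical shoebox hall, metres
      V = np.prod(dims)
      S = 2 * (dims[0]*dims[1] + dims[0]*dims[2] + dims[1]*dims[2])
      print(mean_free_path_mc(dims), 4 * V / S)       # the two values should be close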

  19. Changes in flexibility upon binding: Application of the self-consistent pair contact probability method to protein-protein interactions

    NASA Astrophysics Data System (ADS)

    Canino, Lawrence S.; Shen, Tongye; McCammon, J. Andrew

    2002-12-01

    We extend the self-consistent pair contact probability method to the evaluation of the partition function for a protein complex at thermodynamic equilibrium. Specifically, we adapt the method for multichain models and introduce a parametrization for amino acid-specific pairwise interactions. This method is similar to the Gaussian network model but allows for the adjusting of the strengths of native state contacts. The method is first validated on a high resolution x-ray crystal structure of bovine Pancreatic Phospholipase A2 by comparing calculated B-factors with reported values. We then examine binding-induced changes in flexibility in protein-protein complexes, comparing computed results with those obtained from x-ray crystal structures and molecular dynamics simulations. In particular, we focus on the mouse acetylcholinesterase:fasciculin II and the human α-thrombin:thrombomodulin complexes.

  20. Estimation of the detection probability for Yangtze finless porpoises (Neophocaena phocaenoides asiaeorientalis) with a passive acoustic method.

    PubMed

    Akamatsu, T; Wang, D; Wang, K; Li, S; Dong, S; Zhao, X; Barlow, J; Stewart, B S; Richlen, M

    2008-06-01

    Yangtze finless porpoises were surveyed by using simultaneous visual and acoustical methods from 6 November to 13 December 2006. Two research vessels towed stereo acoustic data loggers, which were used to store the intensity and sound source direction of the high frequency sonar signals produced by finless porpoises at detection ranges up to 300 m on each side of the vessel. Simple stereo beam forming allowed the separation of distinct biosonar sound sources, which enabled us to count the number of vocalizing porpoises. Acoustically, 204 porpoises were detected from one vessel and 199 from the other vessel in the same section of the Yangtze River. Visually, 163 and 162 porpoises were detected from the two vessels within 300 m of the vessel track. The calculated detection probability using the acoustic method was approximately twice that for visual detection for each vessel. The difference in detection probabilities between the two methods was caused by the large number of single individuals that were missed by visual observers. However, the sizes of large groups were underestimated by using the acoustic method. Acoustic and visual observations complemented each other in the accurate detection of porpoises. The use of simple, relatively inexpensive acoustic monitoring systems should enhance population surveys of free-ranging, echolocating odontocetes. PMID:18537391

  1. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources causing train-bridge random vibration. A new random vibration theory for the coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with the slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Finally, the Newmark-β integration method and the double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.

  2. Comparison of Colilert-18 with miniaturised most probable number method for monitoring of Escherichia coli in bathing water.

    PubMed

    Tiwari, Ananda; Niemelä, Seppo I; Vepsäläinen, Asko; Rapala, Jarkko; Kalso, Seija; Pitkänen, Tarja

    2016-02-01

    The purpose of this equivalence study was to compare an alternative method, Colilert-18 Quanti-Tray (ISO 9308-2) with the European bathing water directive (2006/7/EC) reference method, the miniaturised most probable number (MMPN) method (ISO 9308-3), for the analysis of Escherichia coli. Six laboratories analysed a total of 263 bathing water samples in Finland. The comparison was carried out according to ISO 17994:2004. The recovery of E. coli using the Colilert-18 method was 7.0% and 8.6% lower than that of the MMPN method after 48 hours and 72 hours of incubation, respectively. The confirmation rate of presumptive E. coli-positive wells in the Colilert-18 and MMPN methods was high (97.8% and 98.0%, respectively). However, the testing of presumptive E. coli-negative but coliform bacteria-positive (yellow but not fluorescent) Colilert-18 wells revealed 7.3% false negative results. There were more false negatives in the naturally contaminated waters than in the samples spiked with waste water. The difference between the recovery of Colilert-18 and the MMPN method was considered not significant, and subsequently the methods are considered as equivalent for bathing water quality monitoring in Finland. Future bathing water method equivalence verification studies may use the data reported herein. The laboratories should make sure that any wells showing even minor fluorescence will be determined as positive for E. coli. PMID:26837836

  3. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    NASA Astrophysics Data System (ADS)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

    A myriad of novel applications have emerged nowadays for different types of navigation systems. One of their most frequent applications is wayfinding. Since there are significant differences between the nature of pedestrian wayfinding problems and that of vehicle navigation, navigation services designed for vehicles are not appropriate for pedestrian wayfinding purposes. In addition, diversity in the environmental conditions of the users and in their preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is necessary that performs intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved with the help of a ubiquitous service that is adapted to the contexts. Such a service possesses both Context-Awareness and User-Awareness capabilities. These capabilities are the main features of ubiquitous services that make them flexible in response to any user in any situation. In this paper, a multi-criteria path optimization method is proposed that provides a Ubiquitous Pedestrian Wayfinding Service (UPWFS). The proposed method considers four criteria that are summarized in the length, safety, difficulty and attraction of the path. A conceptual framework is proposed to show the influencing factors that have effects on the criteria. Then, a mathematical model is developed on which the proposed path optimization method is based. Finally, data from a local district in Tehran are chosen as the case study in order to evaluate the performance of the proposed method in real situations. Results of the study show that the proposed method successfully captures the effects of the contexts in the wayfinding procedure, demonstrating its efficiency in providing a ubiquitous pedestrian wayfinding service.
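
    One common way to realize a multi-criteria route choice of this kind is to combine the criteria into a single non-negative edge cost whose weights change with the context, and then run an ordinary shortest-path search. The criterion scaling, context-dependent weights and toy sidewalk graph in the sketch below are assumptions, not the paper's mathematical model.

      import heapq

      def context_cost(attrs, w):
          """Combined non-negative edge cost (lower is better for every term).
          Safety and attraction are given in [0, 1] with 1 = best, so they enter
          as (1 - value); length is in metres, difficulty in [0, 1]."""
          return (w["length"] * attrs["length"]
                  + w["safety"] * (1.0 - attrs["safety"])
                  + w["difficulty"] * attrs["difficulty"]
                  + w["attraction"] * (1.0 - attrs["attraction"]))

      def best_path(graph, start, goal, w):
          """Dijkstra on the weighted-sum cost; returns (total cost, node sequence)."""
          heap, seen = [(0.0, start, [start])], set()
          while heap:
              cost, node, path = heapq.heappop(heap)
              if node == goal:
                  return cost, path
              if node in seen:
                  continue
              seen.add(node)
              for nxt, attrs in graph.get(node, []):
                  if nxt not in seen:
                      heapq.heappush(heap, (cost + context_cost(attrs, w), nxt, path + [nxt]))
          return float("inf"), []

      # Hypothetical sidewalk graph; weights would come from the user/context model
      g = {"A": [("B", {"length": 120, "safety": 0.9, "difficulty": 0.2, "attraction": 0.3}),
                 ("C", {"length": 80,  "safety": 0.4, "difficulty": 0.6, "attraction": 0.8})],
           "B": [("D", {"length": 100, "safety": 0.8, "difficulty": 0.1, "attraction": 0.5})],
           "C": [("D", {"length": 90,  "safety": 0.5, "difficulty": 0.7, "attraction": 0.9})]}
      night = {"length": 0.01, "safety": 5.0, "difficulty": 1.0, "attraction": 0.5}
      print(best_path(g, "A", "D", night))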

  4. Rapid, single-step most-probable-number method for enumerating fecal coliforms in effluents from sewage treatment plants

    NASA Technical Reports Server (NTRS)

    Munoz, E. F.; Silverman, M. P.

    1979-01-01

    A single-step most-probable-number method for determining the number of fecal coliform bacteria present in sewage treatment plant effluents is discussed. A single growth medium based on that of Reasoner et al. (1976) and consisting of 5.0 g proteose peptone, 3.0 g yeast extract, 10.0 g lactose, 7.5 g NaCl, 0.2 g sodium lauryl sulfate, and 0.1 g sodium desoxycholate per liter is used. The pH is adjusted to 6.5, and samples are incubated at 44.5 °C. Bacterial growth is detected either by measuring the increase with time in the electrical impedance ratio between the inoculated sample vial and an uninoculated reference vial or by visual examination for turbidity. Results obtained by the single-step method for chlorinated and unchlorinated effluent samples are in excellent agreement with those obtained by the standard method. It is suggested that in automated treatment plants impedance ratio data could be automatically matched by computer programs with the appropriate dilution factors and most probable number tables already in the computer memory, with the corresponding result displayed as fecal coliforms per 100 ml of effluent.
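
    The most-probable-number value itself can be computed by maximum likelihood instead of table lookup, as sketched below under the usual assumption of Poisson-distributed organisms. The classic 10/1/0.1 mL, five-tube dilution layout and the positive counts are only an example; the result can be checked against a published MPN table.

      import numpy as np

      def mpn_estimate(volumes, tubes, positives):
          """Maximum-likelihood Most Probable Number estimate.

          volumes:   sample volume (mL) inoculated per tube at each dilution
          tubes:     number of tubes at each dilution
          positives: number of positive tubes at each dilution
          Returns the concentration (organisms per mL) maximizing the likelihood,
          where P(tube positive) = 1 - exp(-concentration * volume).
          """
          v = np.asarray(volumes, dtype=float)
          n = np.asarray(tubes, dtype=float)
          p = np.asarray(positives, dtype=float)
          lam = np.logspace(-3, 4, 20000)           # candidate concentrations (per mL)
          prob_pos = 1.0 - np.exp(-np.outer(lam, v))
          loglik = (p * np.log(prob_pos + 1e-300) - (n - p) * np.outer(lam, v)).sum(axis=1)
          return lam[np.argmax(loglik)]

      # Example 3-dilution, 5-tube series: 10, 1 and 0.1 mL with 5, 3, 1 positive tubes
      print(round(mpn_estimate([10, 1, 0.1], [5, 5, 5], [5, 3, 1]), 2))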

  5. Free energy of conformational transition paths in biomolecules: The string method and its application to myosin VI

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin; Vanden-Eijnden, Eric

    2011-01-01

    A set of techniques developed under the umbrella of the string method is used in combination with all-atom molecular dynamics simulations to analyze the conformation change between the prepowerstroke (PPS) and rigor (R) structures of the converter domain of myosin VI. The challenges specific to the application of these techniques to such a large and complex biomolecule are addressed in detail. These challenges include (i) identifying a proper set of collective variables to apply the string method, (ii) finding a suitable initial string, (iii) obtaining converged profiles of the free energy along the transition path, (iv) validating and interpreting the free energy profiles, and (v) computing the mean first passage time of the transition. A detailed description of the PPS↔R transition in the converter domain of myosin VI is obtained, including the transition path, the free energy along the path, and the rates of interconversion. The methodology developed here is expected to be useful more generally in studies of conformational transitions in complex biomolecules. PMID:21361558

  6. Follow-up: Prospective compound design using the 'SAR Matrix' method and matrix-derived conditional probabilities of activity.

    PubMed

    Gupta-Ostermann, Disha; Hirose, Yoichiro; Odagami, Takenao; Kouji, Hiroyuki; Bajorath, Jürgen

    2015-01-01

    In a previous Method Article, we have presented the 'Structure-Activity Relationship (SAR) Matrix' (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a "chemical space envelope" around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach. PMID:25949808

  7. Path-Integral Renormalization Group Method for Numerical Study of Strongly Correlated Electron Systems

    NASA Astrophysics Data System (ADS)

    Imada, Masatoshi; Kashima, Tsuyoshi

    2000-09-01

    A numerical algorithm for studying strongly correlated electron systems is proposed. The groundstate wavefunction is projected out after a numerical renormalization procedure in the path integral formalism. The wavefunction is expressed from the optimized linear combination of retained states in the truncated Hilbert space with a numerically chosen basis. This algorithm does not suffer from the negative sign problem and can be applied to any type of Hamiltonian in any dimension. The efficiency is tested in examples of the Hubbard model where the basis of Slater determinants is numerically optimized. We show results on fast convergence and accuracy achieved with a small number of retained states.

  8. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    PubMed Central

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  9. Path Finder

    Energy Science and Technology Software Center (ESTSC)

    2014-01-07

    PathFinder is a graph search program, traversing a directed cyclic graph to find pathways between labeled nodes. Searches for paths through ordered sequences of labels are termed signatures. Determining the presence of signatures within one or more graphs is the primary function of Path Finder. Path Finder can work in either batch mode or interactively with an analyst. Results are limited to whether or not a given signature is present in the graph(s).
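
    A signature query of this kind can be sketched as a label-sequence search over a directed cyclic graph, as below. The toy graph, the labels and the matching semantics assumed here (the signature labels must appear in order along a single path, not necessarily consecutively) are illustrative assumptions, not PathFinder's actual implementation.

      def has_signature(graph, labels, signature):
          """Check whether a directed (possibly cyclic) graph contains a path that
          visits nodes carrying the given ordered sequence of labels.

          graph:  {node: [successor, ...]}
          labels: {node: label}
          signature: ordered list of labels to match along a single path
          """
          # search state = (current node, how many signature labels matched so far)
          def advance(node, matched):
              return matched + 1 if matched < len(signature) and labels.get(node) == signature[matched] else matched

          seen = set()
          stack = [(n, advance(n, 0)) for n in graph]
          while stack:
              node, matched = stack.pop()
              if matched == len(signature):
                  return True
              if (node, matched) in seen:
                  continue
              seen.add((node, matched))
              for nxt in graph.get(node, []):
                  stack.append((nxt, advance(nxt, matched)))
          return False

      # Hypothetical labelled graph containing a cycle
      g = {"a": ["b"], "b": ["c", "a"], "c": ["d"], "d": []}
      lab = {"a": "acquire", "b": "transport", "c": "assemble", "d": "deploy"}
      print(has_signature(g, lab, ["acquire", "assemble", "deploy"]))   # True
      print(has_signature(g, lab, ["deploy", "acquire"]))               # False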

  10. Beam splitter and method for generating equal optical path length beams

    DOEpatents

    Qian, Shinan; Takacs, Peter

    2003-08-26

    The present invention is a beam splitter for splitting an incident beam into first and second beams so that the first and second beams have a fixed separation and are parallel upon exiting. The beam splitter includes a first prism, a second prism, and a film located between the prisms. The first prism is defined by a first thickness and a first perimeter which has a first major base. The second prism is defined by a second thickness and a second perimeter which has a second major base. The film is located between the first major base and the second major base for splitting the incident beam into the first and second beams. The first and second perimeters are right angle trapezoidal shaped. The beam splitter is configured for generating equal optical path length beams.

  11. Torsional path integral Monte Carlo method for the quantum simulation of large molecules

    NASA Astrophysics Data System (ADS)

    Miller, Thomas F.; Clary, David C.

    2002-05-01

    A molecular application is introduced for calculating quantum statistical mechanical expectation values of large molecules at nonzero temperatures. The Torsional Path Integral Monte Carlo (TPIMC) technique applies an uncoupled winding number formalism to the torsional degrees of freedom in molecular systems. The internal energy of the molecules ethane, n-butane, n-octane, and enkephalin are calculated at standard temperature using the TPIMC technique and compared to the expectation values obtained using the harmonic oscillator approximation and a variational technique. All studied molecules exhibited significant quantum mechanical contributions to their internal energy expectation values according to the TPIMC technique. The harmonic oscillator approximation approach to calculating the internal energy performs well for the molecules presented in this study but is limited by its neglect of both anharmonicity effects and the potential coupling of intramolecular torsions.

  12. Toward Determining ATPase Mechanism in ABC Transporters: Development of the Reaction Path-Force Matching QM/MM Method.

    PubMed

    Zhou, Y; Ojeda-May, P; Nagaraju, M; Pu, J

    2016-01-01

    Adenosine triphosphate (ATP)-binding cassette (ABC) transporters are ubiquitous ATP-dependent membrane proteins involved in translocations of a wide variety of substrates across cellular membranes. To understand the chemomechanical coupling mechanism as well as functional asymmetry in these systems, a quantitative description of how ABC transporters hydrolyze ATP is needed. Complementary to experimental approaches, computer simulations based on combined quantum mechanical and molecular mechanical (QM/MM) potentials have provided new insights into the catalytic mechanism in ABC transporters. Quantitatively reliable determination of the free energy requirement for enzymatic ATP hydrolysis, however, requires substantial statistical sampling on QM/MM potential. A case study shows that brute force sampling of ab initio QM/MM (AI/MM) potential energy surfaces is computationally impractical for enzyme simulations of ABC transporters. On the other hand, existing semiempirical QM/MM (SE/MM) methods, although affordable for free energy sampling, are unreliable for studying ATP hydrolysis. To close this gap, a multiscale QM/MM approach named reaction path-force matching (RP-FM) has been developed. In RP-FM, specific reaction parameters for a selected SE method are optimized against AI reference data along reaction paths by employing the force matching technique. The feasibility of the method is demonstrated for a proton transfer reaction in the gas phase and in solution. The RP-FM method may offer a general tool for simulating complex enzyme systems such as ABC transporters. PMID:27498639

  13. Improved Most-Probable-Number Method To Detect Sulfate-Reducing Bacteria with Natural Media and a Radiotracer

    PubMed Central

    Vester, Flemming; Ingvorsen, Kjeld

    1998-01-01

    A greatly improved most-probable-number (MPN) method for selective enumeration of sulfate-reducing bacteria (SRB) is described. The method is based on the use of natural media and radiolabeled sulfate (³⁵SO₄²⁻). The natural media used consisted of anaerobically prepared sterilized sludge or sediment slurries obtained from sampling sites. The densities of SRB in sediment samples from Kysing Fjord (Denmark) and activated sludge were determined by using a normal MPN (N-MPN) method with synthetic cultivation media and a tracer MPN (T-MPN) method with natural media. The T-MPN method with natural media always yielded significantly higher (100- to 1,000-fold-higher) MPN values than the N-MPN method with synthetic media. The recovery of SRB from environmental samples was investigated by simultaneously measuring sulfate reduction rates (by a ³⁵S-radiotracer method) and bacterial counts by using the T-MPN and N-MPN methods, respectively. When bacterial numbers estimated by the T-MPN method with natural media were used, specific sulfate reduction rates (qSO₄²⁻) of 10⁻¹⁴ to 10⁻¹³ mol of SO₄²⁻ cell⁻¹ day⁻¹ were calculated, which is within the range of qSO₄²⁻ values previously reported for pure cultures of SRB (10⁻¹⁵ to 10⁻¹⁴ mol of SO₄²⁻ cell⁻¹ day⁻¹). qSO₄²⁻ values calculated from N-MPN values obtained with synthetic media were several orders of magnitude higher (2 × 10⁻¹⁰ to 7 × 10⁻¹⁰ mol of SO₄²⁻ cell⁻¹ day⁻¹), showing that viable counts of SRB were seriously underestimated when standard enumeration media were used. Our results demonstrate that the use of natural media results in significant improvements in estimates of the true numbers of SRB in environmental samples. PMID:9572939

  14. Why Probability?

    ERIC Educational Resources Information Center

    Weatherly, Myra S.

    1984-01-01

    Instruction in mathematical probability to enhance higher levels of critical and creative thinking with gifted students is described. Among thinking skills developed by such an approach are analysis, synthesis, evaluation, fluency, and complexity. (CL)

  15. Location and release time identification of pollution point source in river networks based on the Backward Probability Method.

    PubMed

    Ghane, Alireza; Mazaheri, Mehdi; Mohammad Vali Samani, Jamal

    2016-09-15

    The pollution of rivers due to accidental spills is a major threat to the environment and human health. To protect river systems from accidental spills, it is essential to introduce a reliable tool for the identification process. The Backward Probability Method (BPM) is one of the most recommended tools, able to provide information on the prior location and the release time of the pollution. This method was originally developed and employed in groundwater pollution source identification problems. One of the objectives of this study is to apply this method to identifying the pollution source location and release time in surface waters, mainly in rivers. To accomplish this task, a numerical model is developed based on adjoint analysis. The developed model is then verified using an analytical solution and some real data. The second objective of this study is to extend the method to pollution source identification in river networks. In this regard, a hypothetical test case is considered. In these simulations, all of the suspected points are identified using only one backward simulation. The results demonstrated that all suspected points determined by the BPM could be possible pollution sources. The proposed approach is accurate and computationally efficient and does not need any simplification of the river geometry and flow. Due to this simplicity, it is highly recommended for practical purposes. PMID:27219462

  16. A method for evaluating the expectation value of a power spectrum using the probability density function of phases

    SciTech Connect

    Caliandro, G.A.; Torres, D.F.; Rea, N. E-mail: dtorres@aliga.ieec.uab.es

    2013-07-01

    Here, we present a new method to evaluate the expectation value of the power spectrum of a time series. A statistical approach is adopted to define the method. After its derivation, the method is validated by showing that it reproduces the known properties of the power spectrum when the time series contains a periodic signal. The approach is also validated in general with numerical simulations. The method highlights the important role played by the probability density function of the phases associated with each time stamp at a given frequency, and how this distribution can be perturbed by the uncertainties of the parameters in the pulsar ephemeris. We applied the method to derive the power spectrum in the case in which the first derivative of the pulsar frequency is unknown and not negligible. We also undertook the study of the most general case of a blind search, in which both the frequency and its first derivative are uncertain. We found analytical solutions for the above cases in terms of the sum of squared Fresnel integrals.

  17. Hamilton-Jacobi equation for the least-action/least-time dynamical path based on fast marching method

    NASA Astrophysics Data System (ADS)

    Dey, Bijoy K.; Janicki, Marek R.; Ayers, Paul W.

    2004-10-01

    Classical dynamics can be described with Newton's equation of motion or, equivalently, using the Hamilton-Jacobi equation. Here, the possibility of using the Hamilton-Jacobi equation to describe chemical reaction dynamics is explored. This requires an efficient computational approach for constructing the physically and chemically relevant solutions to the Hamilton-Jacobi equation; here we solve Hamilton-Jacobi equations on a Cartesian grid using Sethian's fast marching method [J. A. Sethian, Proc. Natl. Acad. Sci. USA 93, 1591 (1996)]. Using this method, we can, starting from an arbitrary initial conformation, find reaction paths that minimize the action or the time. The method is demonstrated by computing the mechanism for two different systems: a model system with four different stationary configurations and the H + H₂ → H₂ + H reaction. Least-time paths (termed brachistochrones in classical mechanics) seem to be a suitable choice for the reaction coordinate, allowing one to determine the key intermediates and final product of a chemical reaction. For conservative systems the Hamilton-Jacobi equation does not depend on time, so this approach may be useful for simulating systems where important motions occur on a variety of different time scales.
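
    A minimal sketch of the least-time idea, assuming the third-party scikit-fmm package and not reproducing the authors' implementation: solve the eikonal equation |∇T| = 1/speed for the arrival-time field T with a fast-marching solver, then extract a least-time path by greedy descent on T over a hypothetical speed landscape.

```python
# Sketch: fast-marching arrival times plus greedy backtracking to a least-time path.
import numpy as np
import skfmm  # third-party fast-marching solver (assumed installed)

n = 101
x, y = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
speed = 0.2 + np.exp(-(x**2 + y**2))        # hypothetical speed landscape (always > 0)

phi = np.ones((n, n))
phi[0, 0] = -1                              # zero contour of phi marks the start point
T = skfmm.travel_time(phi, speed, dx=4.0 / (n - 1))

path = [(n - 1, n - 1)]                     # begin backtracking at the target corner
for _ in range(4 * n * n):
    i, j = path[-1]
    if (i, j) == (0, 0):
        break
    nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di or dj) and 0 <= i + di < n and 0 <= j + dj < n]
    path.append(min(nbrs, key=lambda ij: T[ij]))   # step to the smallest arrival time
print("path cells:", len(path), " total travel time:", float(T[n - 1, n - 1]))
```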

  18. Methods in probability and statistical inference. Final report, June 15, 1975-June 30, 1979. [Dept. of Statistics, Univ. of Chicago

    SciTech Connect

    Wallace, D L; Perlman, M D

    1980-06-01

    This report describes the research activities of the Department of Statistics, University of Chicago, during the period June 15, 1975 to July 30, 1979. Nine research projects are briefly described on the following subjects: statistical computing and approximation techniques in statistics; numerical computation of first passage distributions; probabilities of large deviations; combining independent tests of significance; small-sample efficiencies of tests and estimates; improved procedures for simultaneous estimation and testing of many correlations; statistical computing and improved regression methods; comparison of several populations; and unbiasedness in multivariate statistics. A description of the statistical consultation activities of the Department that are of interest to DOE, in particular, the scientific interactions between the Department and the scientists at Argonne National Laboratories, is given. A list of publications issued during the term of the contract is included.

  19. Shortest Paths.

    ERIC Educational Resources Information Center

    Shore, M. L.

    1980-01-01

    There are many uses for the shortest path algorithm presented which are limited only by our ability to recognize when a problem may be converted into the shortest path in a graph representation. (Author/TG)
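
    For reference, the single-source shortest-path computation the record alludes to can be sketched in a few lines; this is a generic Dijkstra implementation on a toy weighted graph, not the article's program.

```python
# Generic Dijkstra shortest-path sketch on a small weighted graph.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
print(dijkstra(g, "A"))                   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```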

  20. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, the classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of the mechanisms of noise reduction and posterior probability. HGP shows much better performance in the experimental results. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134). The spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is proved that the utilization of HGP in LRS detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
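
    The record's hybrid Gaussian process classifier is not publicly specified, but the role of posterior class probabilities in GP classification can be illustrated with scikit-learn's standard implementation on synthetic three-class, four-feature data; every name and number below is illustrative rather than taken from the study.

```python
# Illustrative GP classification with posterior probabilities (scikit-learn),
# standing in for the paper's hybrid GP; three synthetic "tissue" classes, 4 features.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 4
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in (0.0, 1.0, 2.0)])               # synthetic spectral features
y = np.repeat([0, 1, 2], n_per_class)                   # three hypothetical classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0)).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)                         # posterior class probabilities
print("accuracy:", clf.score(X_te, y_te))
print("first sample posterior:", np.round(proba[0], 3))
```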

  1. A multiscale finite element model validation method of composite cable-stayed bridge based on Probability Box theory

    NASA Astrophysics Data System (ADS)

    Zhong, Rumian; Zong, Zhouhong; Niu, Jie; Liu, Qiqi; Zheng, Peijuan

    2016-05-01

    Modeling and simulation are routinely implemented to predict the behavior of complex structures. These tools powerfully unite theoretical foundations, numerical models and experimental data, which include associated uncertainties and errors. A new methodology for multi-scale finite element (FE) model validation is proposed in this paper. The method is based on a two-step updating method, a novel approach for obtaining the coupling parameters in the gluing sub-regions of a multi-scale FE model, and on Probability Box (P-box) theory, which provides lower and upper bounds for quantifying and propagating the uncertainty of structural parameters. The structural health monitoring data of Guanhe Bridge, a long-span composite cable-stayed bridge, and Monte Carlo simulation were used to verify the proposed method. The results show satisfactory accuracy, as the overlap ratio index of each modal frequency is over 89% without the average absolute value of relative errors, and the CDF of the normal distribution agrees well with the measured frequencies of Guanhe Bridge. The validated multi-scale FE model may be further used in structural damage prognosis and safety prognosis.

  2. Computing the optimal path in stochastic dynamical systems.

    PubMed

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces. PMID:27586597

  3. Computing the optimal path in stochastic dynamical systems

    NASA Astrophysics Data System (ADS)

    Bauver, Martha; Forgoston, Eric; Billings, Lora

    2016-08-01

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  4. Performance evaluation of a sorbent tube sampling method using short path thermal desorption for volatile organic compounds.

    PubMed

    Peng, C Y; Batterman, S

    2000-08-01

    Air sampling, using sorbents, thermal desorption and gas chromatography, is a versatile method for identifying and quantifying trace levels of volatile organic compounds (VOCs). Thermal desorption can provide high sensitivity, appropriate choices of sorbents and method parameters can accommodate a wide range of compounds and high humidity, and automated short-path systems can minimize artifacts, losses and carry-over effects. This study evaluates the performance of a short-path thermal desorption method for 77 VOCs using laboratory and field tests and a dual sorbent system (Tenax GR, Carbosieve SIII). Laboratory tests showed that the method requirements for ambient air sampling were easily achieved for most compounds, e.g., using the average and standard deviation across target compounds, blank emissions were ≤ 0.3 ng per sorbent tube for all target compounds except benzene, toluene and phenol; the method detection limit was 0.05 ± 0.08 ppb, reproducibility was 12 ± 6%, linearity, as the relative standard deviation of relative response factors, was 16 ± 9%, desorption efficiency was 99 ± 28%, samples stored for 1-6 weeks had recoveries of 87 ± 9%, and high humidity samples had recoveries of 102 ± 12%. Due to sorbent, column and detector characteristics, performance was somewhat poorer for phenol groups, ketones, and nitrogen-containing compounds. The laboratory results were confirmed in an analysis of replicate samples collected in two field studies that sampled ambient air along roadways and indoor air in a large office building. Replicates collected under field conditions demonstrated good agreement except for very low concentrations or large (> 41 volume) samples of high humidity air. Overall, the method provides excellent performance and satisfactory throughput for many applications. PMID:11249785

  5. A numerical scheme for optimal transition paths of stochastic chemical kinetic systems

    NASA Astrophysics Data System (ADS)

    Liu, Di

    2008-10-01

    We present a new framework for finding the optimal transition paths of metastable stochastic chemical kinetic systems with large system size. The optimal transition paths are identified to be the most probable paths according to the Large Deviation Theory of stochastic processes. Dynamical equations for the optimal transition paths are derived using the variational principle. A modified Minimum Action Method (MAM) is proposed as a numerical scheme to solve the optimal transition paths. Applications to Gene Regulatory Networks such as the toggle switch model and the Lactose Operon Model in Escherichia coli are presented as numerical examples.
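
    The action-minimization step at the heart of such methods can be sketched for a one-dimensional double-well drift: discretize the path, evaluate the Freidlin-Wentzell action, and minimize over the interior points. This is a generic illustration under assumed dynamics, not the paper's modified MAM for chemical kinetic systems.

```python
# Minimal sketch, assuming 1D overdamped dynamics dx = f(x) dt + sqrt(eps) dW with
# drift f(x) = x - x^3 (stable states at -1 and +1). The Freidlin-Wentzell action
# S = (1/4) * integral |x' - f(x)|^2 dt is minimized over discretized paths with
# fixed endpoints; this illustrates the minimum-action idea, not the paper's MAM.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x - x**3
a, b, T, n = -1.0, 1.0, 20.0, 100
dt = T / n

def action(interior):
    x = np.concatenate(([a], interior, [b]))
    xdot = np.diff(x) / dt
    xmid = 0.5 * (x[:-1] + x[1:])
    return 0.25 * np.sum((xdot - f(xmid)) ** 2) * dt

x0 = np.linspace(a, b, n + 1)[1:-1]       # straight-line initial path
res = minimize(action, x0, method="L-BFGS-B")
print("minimal action ≈", res.fun)        # ≈ 0.25 (barrier of V = x^4/4 - x^2/2) for long T
```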

  6. "Albedo dome": a method for measuring spectral flux-reflectance in a laboratory for media with long optical paths.

    PubMed

    Light, Bonnie; Carns, Regina C; Warren, Stephen G

    2015-06-10

    A method is presented for accurate measurement of spectral flux-reflectance (albedo) in a laboratory, for media with long optical path lengths, such as snow and ice. The approach uses an acrylic hemispheric dome, which, when placed over the surface being studied, serves two functions: (i) it creates an overcast "sky" to illuminate the target surface from all directions within a hemisphere, and (ii) serves as a platform for measuring incident and backscattered spectral radiances, which can be integrated to obtain fluxes. The fluxes are relative measurements and because their ratio is used to determine flux-reflectance, no absolute radiometric calibrations are required. The dome and surface must meet minimum size requirements based on the scattering properties of the surface. This technique is suited for media with long photon path lengths since the backscattered illumination is collected over a large enough area to include photons that reemerge from the domain far from their point of entry because of multiple scattering and small absorption. Comparison between field and laboratory albedo of a portable test surface demonstrates the viability of this method. PMID:26192823

  7. Methods for estimating annual exceedance probability discharges for streams in Arkansas, based on data through water year 2013

    USGS Publications Warehouse

    Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.

    2016-01-01

    In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized-least squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization
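
    The candidate-variable screens described here (variance inflation factor < 2.5, p ≤ 0.05) are straightforward to reproduce in outline. The sketch below applies them to synthetic basin characteristics with statsmodels; it illustrates the screening criteria only and is not the report's regional regression.

```python
# Sketch of OLS candidate-variable screening by VIF < 2.5 and p <= 0.05,
# using synthetic data in place of the report's basin characteristics.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 281                                            # number of streamgages in the report
area = rng.lognormal(5, 1, n)                      # hypothetical drainage area
slope = rng.lognormal(0, 0.5, n)                   # hypothetical basin slope
elev = 0.8 * np.log(area) + rng.normal(0, 0.3, n)  # deliberately collinear with area
X = sm.add_constant(np.column_stack([np.log(area), np.log(slope), elev]))
y = 2.0 + 0.8 * np.log(area) + 0.3 * np.log(slope) + rng.normal(0, 0.4, n)

fit = sm.OLS(y, X).fit()
names = ["const", "log_area", "log_slope", "elev"]
for k in range(1, X.shape[1]):                     # skip the intercept column
    vif = variance_inflation_factor(X, k)
    keep = vif < 2.5 and fit.pvalues[k] <= 0.05
    print(f"{names[k]:10s} VIF={vif:6.2f}  p={fit.pvalues[k]:.3f}  keep={keep}")
```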

  8. Start and Stop Rules for Exploratory Path Analysis.

    ERIC Educational Resources Information Center

    Shipley, Bill

    2002-01-01

    Describes a method for choosing rejection probabilities for the tests of independence that are used in constraint-based algorithms of exploratory path analysis. The method consists of generating a Markov or semi-Markov model from the equivalence class represented by a partial ancestral graph and then testing the d-separation implications. (SLD)

  9. A sequential method for passive detection, characterization, and localization of multiple low probability of intercept LFMCW signals

    NASA Astrophysics Data System (ADS)

    Hamschin, Brandon M.

    A method for passive Detection, Characterization, and Localization (DCL) of multiple low power, Linear Frequency Modulated Continuous Wave (LFMCW) (i.e., Low Probability of Intercept (LPI)) signals is proposed. We demonstrate, via simulation, laboratory, and outdoor experiments, that the method is able to detect and correctly characterize the parameters that define two simultaneous LFMCW signals with probability greater than 90% when the signal-to-noise ratio is -10 dB or greater. While this performance is compelling, it is far from the Cramer-Rao Lower Bound (CRLB), which we derive, and from the performance of the Maximum Likelihood Estimator (MLE), which we simulate. The loss in performance relative to the CRLB and the MLE is the price paid for computational tractability. The LFMCW signal is the focus of this work because of its common use in modern, low-cost radar systems. In contrast to other detection and characterization approaches, such as the MLE and those based on the Wigner-Ville Transform (WVT) or the Wigner-Ville Hough Transform (WVHT), our approach does not begin with a parametric model of the received signal that is specified directly in terms of its LFMCW constituents. Rather, we analyze the signal over time intervals that are short, non-overlapping, and contiguous by modeling it within these intervals as a sum of a small number of sinusoidal (i.e., harmonic) components with unknown frequencies, deterministic but unknown amplitudes, unknown order (i.e., number of harmonic components), and unknown noise autocorrelation function. It is this model of the data that makes the solution computationally feasible, but also what leads to a degradation in performance since estimates are not based on the full time series. By modeling the signal in this way, we reliably detect the presence of multiple LFMCW signals in colored noise without the need for prewhitening, efficiently estimate (i.e., characterize) their parameters, provide estimation error

  10. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. The tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulation clearly shows that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624

  11. Evaluating open-path FTIR spectrometer data using different quantification methods, libraries, and background spectra obtained under varying environmental conditions

    SciTech Connect

    Tomasko, M.S.

    1995-12-31

    Studies were performed to evaluate the accuracy of open-path Fourier Transform Infrared (OP-FTIR) spectrometers using a 35 foot outdoor exposure chamber in Pittsboro, North Carolina. Results obtained with the OP-FTIR spectrometer were compared to results obtained with a reference method (a gas chromatograph equipped with a flame ionization detector, GC-FID). Concentration results were evaluated in terms of the mathematical methods and spectral libraries used for quantification. In addition, the research investigated the effect on quantification of using different backgrounds obtained at various times during the day. The chemicals used in this study were toluene, cyclohexane, and methanol; and these were evaluated over the concentration range of 5-30 ppm.

  12. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method

    PubMed Central

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system. The storage effectiveness and the network resilience can be significantly enhanced as well. The tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulation clearly shows that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624

  13. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Improved characterization of distributed emission sources of greenhouse gases such as methane from concentrated animal feeding operations requires more accurate methods. One promising method has recently been used by the USEPA. It employs a vertical radial plume mapping (VRPM) algorithm using optical remot...

  14. A path-independent method for barrier option pricing in hidden Markov models

    NASA Astrophysics Data System (ADS)

    Rashidi Ranjbar, Hedieh; Seifi, Abbas

    2015-12-01

    This paper presents a method for barrier option pricing under a Black-Scholes model with Markov switching. We extend the option pricing method of Buffington and Elliott to price continuously monitored barrier options under a Black-Scholes model with regime switching. We use a regime switching random Esscher transform in order to determine an equivalent martingale pricing measure, and then solve the resulting multidimensional integral for pricing barrier options. We have calculated prices for down-and-out call options under a two-state hidden Markov model using two different Monte-Carlo simulation approaches and the proposed method. A comparison of the results shows that our method is faster than Monte-Carlo simulation methods.
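
    One of the Monte-Carlo benchmarks mentioned is easy to sketch: simulate a two-state Markov chain for the regime, evolve the asset with the regime-dependent volatility, and knock the call out once the barrier is breached. All parameter values below are hypothetical, and the barrier is monitored discretely rather than continuously.

```python
# Naive Monte-Carlo sketch for a down-and-out call under a two-state
# regime-switching Black-Scholes model (hypothetical parameters, daily monitoring).
import numpy as np

rng = np.random.default_rng(0)
S0, K, B, T, r = 100.0, 100.0, 90.0, 1.0, 0.03
sigma = np.array([0.15, 0.35])             # volatilities in regimes 0 and 1
stay = np.array([0.9, 0.8])                # P(stay in regime 0), P(stay in regime 1)
n_steps, n_paths = 252, 100_000
dt = T / n_steps

S = np.full(n_paths, S0)
regime = np.zeros(n_paths, dtype=int)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    switch = rng.random(n_paths) > stay[regime]
    regime = np.where(switch, 1 - regime, regime)
    vol = sigma[regime]
    S *= np.exp((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * rng.standard_normal(n_paths))
    alive &= S > B                          # down-and-out: knocked out once S <= B

payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
print("down-and-out call ≈", np.exp(-r * T) * payoff.mean())
```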

  15. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    NASA Astrophysics Data System (ADS)

    Zeng, Xiancheng; Hu, Hao; Hu, Xiangqian; Yang, Weitao

    2009-04-01

    A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is carried out only with the QM subsystem fixed. It thereby avoids "on-the-fly" QM calculations and overcomes the high computational cost of direct QM/MM MD sampling. In the applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.

  16. Calculating solution redox free energies with ab initio quantum mechanical/molecular mechanical minimum free energy path method

    SciTech Connect

    Zeng Xiancheng; Hu Hao; Hu Xiangqian; Yang Weitao

    2009-04-28

    A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is carried out only with the QM subsystem fixed. It thereby avoids 'on-the-fly' QM calculations and overcomes the high computational cost of direct QM/MM MD sampling. In the applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.

  17. Fractional Levy motion through path integrals

    SciTech Connect

    Calvo, Ivan; Sanchez, Raul; Carreras, Benjamin A

    2009-01-01

    Fractional Levy motion (fLm) is the natural generalization of fractional Brownian motion in the context of self-similar stochastic processes and stable probability distributions. In this paper we give an explicit derivation of the propagator of fLm by using path integral methods. The propagators of Brownian motion and fractional Brownian motion are recovered as particular cases. The fractional diffusion equation corresponding to fLm is also obtained.
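
    For the memoryless α-stable (Lévy flight) special case, the propagator is the Fourier inverse of a stretched-exponential characteristic function, which is simple to evaluate numerically; the sketch below uses an arbitrary diffusivity and recovers the Gaussian propagator at α = 2 as a check. It is an illustrative special case, not the fLm path-integral derivation itself.

```python
# Sketch: free propagator of an alpha-stable Levy flight by numerical Fourier
# inversion of its characteristic function exp(-D * t * |k|**alpha).
# alpha = 2 recovers the Brownian (Gaussian) propagator.
import numpy as np
from scipy.integrate import quad

def propagator(x, t, alpha, D=1.0):
    integrand = lambda k: np.cos(k * x) * np.exp(-D * t * k**alpha)
    val, _ = quad(integrand, 0, np.inf, limit=200)
    return val / np.pi

x = np.linspace(-5, 5, 11)
p_levy = [propagator(xi, t=1.0, alpha=1.5) for xi in x]       # heavy-tailed case
p_gauss = [propagator(xi, t=1.0, alpha=2.0) for xi in x]      # Gaussian case
exact = np.exp(-x**2 / 4.0) / np.sqrt(4.0 * np.pi)            # N(0, 2Dt) with D = t = 1
print(np.allclose(p_gauss, exact, atol=1e-6))                 # True
print(p_levy[-1], p_gauss[-1])                                # alpha=1.5 has the heavier tail
```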

  18. Characterizing magnetic resonance signal decay due to Gaussian diffusion: the path integral approach and a convenient computational method

    PubMed Central

    Özarslan, Evren; Westin, Carl-Fredrik; Mareci, Thomas H.

    2016-01-01

    The influence of Gaussian diffusion on the magnetic resonance signal is determined by the apparent diffusion coefficient (ADC) and tensor (ADT) of the diffusing fluid as well as the gradient waveform applied to sensitize the signal to diffusion. Estimations of ADC and ADT from diffusion-weighted acquisitions necessitate computations of, respectively, the b-value and b-matrix associated with the employed pulse sequence. We establish the relationship between these quantities and the gradient waveform by expressing the problem as a path integral and explicitly evaluating it. Further, we show that these important quantities can be conveniently computed for any gradient waveform using a simple algorithm that requires a few lines of code. With this representation, our technique complements the multiple correlation function (MCF) method commonly used to compute the effects of restricted diffusion, and provides a consistent and convenient framework for studies that aim to infer the microstructural features of the specimen. PMID:27182208
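
    The "few lines of code" referred to amount to accumulating the effective gradient into q(t) and integrating q(t)²; a generic sketch for a pulsed-gradient (Stejskal-Tanner) waveform is shown below with the analytic b-value as a check. The waveform parameters are arbitrary.

```python
# Sketch: b-value of an effective gradient waveform via b = gamma^2 * integral q(t)^2 dt,
# with q(t) the running integral of G(t); checked against the Stejskal-Tanner formula.
import numpy as np

gamma = 2.675e8                      # gyromagnetic ratio of 1H, rad s^-1 T^-1
G, delta, Delta = 0.03, 0.010, 0.030 # T/m, s, s (arbitrary pulsed-gradient settings)

dt = 1e-6
t = np.arange(0.0, Delta + delta, dt)
g = np.zeros_like(t)
g[t < delta] = G                     # first gradient lobe
g[t >= Delta] = -G                   # second lobe (sign flipped by the 180-degree pulse)

q = gamma * np.cumsum(g) * dt        # q(t) = gamma * integral_0^t G dt'
b_numeric = np.sum(q**2) * dt        # s / m^2
b_analytic = gamma**2 * G**2 * delta**2 * (Delta - delta / 3.0)
print(b_numeric * 1e-6, b_analytic * 1e-6)   # both ≈ 172 s/mm^2
```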

  19. A Shortest-Path-Based Method for the Analysis and Prediction of Fruit-Related Genes in Arabidopsis thaliana

    PubMed Central

    Su, Fangchu; Chen, Lei; Huang, Tao; Cai, Yu-Dong

    2016-01-01

    Biologically, fruits are defined as seed-bearing reproductive structures in angiosperms that develop from the ovary. The fertilization, development and maturation of fruits are crucial for plant reproduction and are precisely regulated by intrinsic genetic regulatory factors. In this study, we used Arabidopsis thaliana as a model organism and attempted to identify novel genes related to fruit-associated biological processes. Specifically, using validated genes, we applied a shortest-path-based method to identify several novel genes in a large network constructed using the protein-protein interactions observed in Arabidopsis thaliana. The described analyses indicate that several of the discovered genes are associated with fruit fertilization, development and maturation in Arabidopsis thaliana. PMID:27434024
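
    The core operation described, collecting the intermediate proteins on shortest paths between validated genes in an interaction network, can be sketched with networkx on a toy graph; the gene names, edges, and weights below are invented for illustration.

```python
# Toy sketch of shortest-path-based candidate collection in a protein-protein
# interaction network (node names and confidence weights are made up).
import itertools
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([            # weight ~ 1 - interaction confidence
    ("AG", "SEP3", 0.2), ("SEP3", "FUL", 0.4), ("FUL", "SHP1", 0.3),
    ("AG", "AP1", 0.7), ("AP1", "SHP1", 0.6), ("SEP3", "X1", 0.5), ("X1", "SHP1", 0.5),
])
validated = ["AG", "FUL", "SHP1"]       # stand-ins for validated fruit-related genes

candidates = set()
for a, b in itertools.combinations(validated, 2):
    path = nx.shortest_path(G, a, b, weight="weight")
    candidates.update(path[1:-1])       # interior nodes become new candidates
print(sorted(candidates - set(validated)))   # e.g. ['SEP3']
```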

  20. RSS-Based Method for Sensor Localization with Unknown Transmit Power and Uncertainty in Path Loss Exponent.

    PubMed

    Huang, Jiyan; Liu, Peng; Lin, Wei; Gui, Guan

    2016-01-01

    The localization of a sensor in wireless sensor networks (WSNs) has now gained considerable attention. Since the transmit power and path loss exponent (PLE) are two critical parameters in the received signal strength (RSS) localization technique, many RSS-based location methods, considering the case that both the transmit power and PLE are unknown, have been proposed in the literature. However, these methods require a search process, and cannot give a closed-form solution to sensor localization. In this paper, a novel RSS localization method with a closed-form solution based on a two-step weighted least squares estimator is proposed for the case with the unknown transmit power and uncertainty in PLE. Furthermore, the complete performance analysis of the proposed method is given in the paper. Both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The relationships between the deterministic CRLB and the proposed stochastic CRLB are presented. The paper also proves that the proposed method can reach the stochastic CRLB. PMID:27618055
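
    The paper's closed-form two-step WLS estimator is not reproduced here; the sketch below only illustrates the underlying measurement model, RSS = P0 − 10·η·log10(d), with a generic joint nonlinear least-squares fit of position, transmit power, and PLE on synthetic data.

```python
# Baseline sketch: joint least-squares estimation of sensor position, transmit
# power P0 and path-loss exponent eta from RSS measurements (synthetic data);
# a generic nonlinear fit, not the paper's closed-form two-step WLS estimator.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
anchors = np.array([[0, 0], [50, 0], [0, 50], [50, 50], [25, 60]], dtype=float)
true_pos, true_P0, true_eta = np.array([18.0, 27.0]), -40.0, 3.2

d = np.linalg.norm(anchors - true_pos, axis=1)
rss = true_P0 - 10 * true_eta * np.log10(d) + rng.normal(0, 1.0, len(anchors))

def residuals(theta):
    x, y, P0, eta = theta
    dist = np.linalg.norm(anchors - np.array([x, y]), axis=1)
    return rss - (P0 - 10 * eta * np.log10(dist))

fit = least_squares(residuals, x0=[25.0, 25.0, -45.0, 3.0])
print("estimated (x, y, P0, eta):", np.round(fit.x, 2))
```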

  1. A Method to Estimate the Probability That Any Individual Lightning Stroke Contacted the Surface Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William; Merceret, Francis J.

    2010-01-01

    A technique has been developed to calculate the probability that any nearby lightning stroke is within any radius of any point of interest. In practice, this provides the probability that a nearby lightning stroke was within a key distance of a facility, rather than the error ellipses centered on the stroke. This process takes the current bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to get the probability that the stroke is inside any specified radius. This new facility-centric technique will be much more useful to the space launch customers and may supersede the lightning error ellipse approach discussed in [5], [6].
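
    A short Monte Carlo stand-in for the facility-centric integral described here: draw stroke locations from the bivariate Gaussian implied by the location error ellipse and count the fraction that fall within the radius of the offset point of interest. The numbers are illustrative; the operational technique integrates the density directly.

```python
# Monte Carlo sketch of the probability that a stroke lies within radius R of a
# facility, given the bivariate Gaussian implied by its location error ellipse.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([0.0, 0.0])            # most likely stroke location (km)
cov = np.array([[0.25, 0.10],          # covariance from the error ellipse (km^2)
                [0.10, 0.49]])
facility = np.array([0.4, -0.3])       # point of interest, offset from the ellipse
R = 1.0                                # radius of concern (km)

samples = rng.multivariate_normal(mean, cov, size=1_000_000)
inside = np.linalg.norm(samples - facility, axis=1) <= R
print("P(stroke within R of facility) ≈", inside.mean())
```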

  2. Systematic methods to enhance diversity knowledge gained: a proposed path to professional richness.

    PubMed

    Scisney-Matlock, M

    2000-01-01

    Faculty members in nursing schools have the responsibility to provide learning experiences for students that have the potential to solidly link indicators of professional richness, knowledge gained about diversity, enhanced critical thinking skills, ethical reasoning and decision-making. The challenge then is for faculty members to implement systematic methods to ensure that students are exposed to in- and out-of-classroom experiences that result in measurable outcomes indicating a strong association between what a student is expected to learn and the diversity knowledge gained. The purpose of this study is to describe the effect of a target intervention derived from synthesizing multiple theories (social, cognitive, moral and ethical) and learning experiences on students' perception of global diversity. Two independent cohorts of senior nursing students (control group, N = 65; experimental group, N = 55) were taught a required undergraduate course, Societal Health Issues (SHI), with the same objectives over two consecutive years. Drastic and remarkable differences were demonstrated using separate multiple regression analyses of the experimental group (adjusted R² change of 47%; F = 11.123, df1 = 4, df2 = 47, p = .000) and control group (adjusted R² change of 1%; F = 1.36, df1 = 4, df2 = 52, p = .259). This study suggests preliminary empirical support for exploring how conceptual frameworks may guide faculty to consider pushing beyond traditional teaching methods to develop and organize systematic methods for infusing diversity content in courses. PMID:11249260

  3. Classifying heterogeneity of spontaneous up-states: a method for revealing variations in firing probability, engaged neurons and Fano factor.

    PubMed

    Gullo, Francesca; Maffezzoli, Andrea; Dossi, Elena; Lecchi, Marzia; Wanke, Enzo

    2012-01-30

    The dynamics of spontaneous and sensory-evoked up-states have been recently compared, in multi-site recordings in vivo and found to have similarities and differences. Also in vitro, this is evident because we here describe a novel computational method to classify into statistically different states the spontaneous reverberating activity recorded from long-term (12-18 days-in vitro) cultured cortical neurons (from 60-site multi-electrode arrays, MEA). State classification was performed by spike number time histograms (SNTH, or other burst features) of excitatory and inhibitory neuron clusters and revealed that in novel identified states the number of engaged neurons or up-state duration can change. To improve the characterization of each state we also computed the firing spike histograms (FSH) which revealed a new facet of the firing probability of clusters. In exemplary functional experiments we show that: (i) up to 6-7 states can be safely categorized during several hours of recordings without observing spike rate changes, (ii) they disappear after a short pharmacological stimulation being replaced with novel states active and living up to 6-8 h, (iii) antagonists in the nM range can split the activity of a homogeneous network into the chronological coexistence of 2 states, one completely different and one not significantly different from control state. In conclusion, we believe that this novel procedure better characterizes the number of functional states of a network and opens up the possibility of predicting the elementary "vocabulary" used by small networks of neurons. PMID:22037594

  4. Contribution analysis of bus pass-by noise based on dynamic transfer path method

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Zheng, Sifa; Hao, Peng; Lian, Xiaomin

    2011-10-01

    Bus pass-by noise has become one of the main noise sources that seriously disturb the mental and physical health of urban residents. The key to reducing bus noise is to identify the major noise sources. In this paper, a dynamic transfer characteristic model of the bus during acceleration is established, which can quantitatively describe the relationship between the sound or vibration sources of the vehicle and the response points outside the vehicle; a test method is also designed that can quickly and easily identify the contributions to the bus pass-by noise. Experimental results show that the dynamic transfer characteristic model can identify the main noise sources and their contributions during acceleration, which is significant for bus noise reduction.

  5. Gravity-dependent signal path variation in a large VLBI telescope modelled with a combination of surveying methods

    NASA Astrophysics Data System (ADS)

    Sarti, Pierguido; Abbondanza, C.; Vittuari, L.

    2009-11-01

    The very long baseline interferometry (VLBI) antenna in Medicina (Italy) is a 32-m AZ-EL mount that was surveyed several times, adopting an indirect method, for the purpose of estimating the eccentricity vector between the co-located VLBI and Global Positioning System instruments. In order to fulfill this task, targets were located in different parts of the telescope’s structure. Triangulation and trilateration on the targets highlight a consistent amount of deformation that biases the estimate of the instrument’s reference point by up to 1 cm, depending on the targets’ locations. Therefore, whenever the estimation of accurate local ties is needed, it is critical to take into consideration the action of gravity on the structure. Furthermore, deformations induced by gravity on VLBI telescopes may modify the length of the path travelled by the incoming radio signal to a non-negligible extent. As a consequence, differently from what is usually assumed, the relative distance of the feed horn’s phase centre with respect to the elevation axis may vary, depending on the telescope’s pointing elevation. The Medicina telescope’s signal path variation ΔL increases by approximately 2 cm as the pointing elevation changes from horizon to zenith; it is described by an elevation-dependent second-order polynomial computed, following Clark and Thomsen (Technical Report 100696, NASA, Greenbelt, 1988), as a linear combination of three terms: receiver displacement ΔR, primary reflector’s vertex displacement ΔV and focal length variations ΔF. ΔL was investigated with a combination of terrestrial triangulation and trilateration, laser scanning and a finite element model of the antenna. The antenna gain (or auto-focus curve) ΔG is routinely determined through astronomical observations. A surprisingly accurate reproduction of ΔG can be obtained with a combination of ΔV, ΔF and ΔR.

  6. Methods for assessing movement path recursion with application to African buffalo in South Africa

    USGS Publications Warehouse

    Bar-David, S.; Bar-David, I.; Cross, P.C.; Ryan, S.J.; Knechtel, C.U.; Getz, W.M.

    2009-01-01

    Recent developments of automated methods for monitoring animal movement, e.g., global positioning systems (GPS) technology, yield high-resolution spatiotemporal data. To gain insights into the processes creating movement patterns, we present two new techniques for extracting information from these data on repeated visits to a particular site or patch ("recursions"). Identification of such patches and quantification of recursion pathways, when combined with patch-related ecological data, should contribute to our understanding of the habitat requirements of large herbivores, of factors governing their space-use patterns, and their interactions with the ecosystem. We begin by presenting output from a simple spatial model that simulates movements of large-herbivore groups based on minimal parameters: resource availability and rates of resource recovery after a local depletion. We then present the details of our new analysis techniques (recursion analysis and circle analysis) and apply them to data generated by our model, as well as two sets of empirical data on movements of African buffalo (Syncerus caffer): the first collected in Klaserie Private Nature Reserve and the second in Kruger National Park, South Africa. Our recursion analyses of model outputs provide us with a basis for inferring aspects of the processes governing the production of buffalo recursion patterns, particularly the potential influence of resource recovery rate. Although the focus of our simulations was a comparison of movement patterns produced by different resource recovery rates, we conclude our paper with a comprehensive discussion of how recursion analyses can be used when appropriate ecological data are available to elucidate various factors influencing movement. Inter alia, these include the various limiting and preferred resources, parasites, and topographical and landscape factors. © 2009 by the Ecological Society of America.

  7. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  8. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  9. Source Apportionment of the Anthropogenic Increment to Ozone, Formaldehyde, and Nitrogen Dioxide by the Path-Integral Method in a 3D Model.

    PubMed

    Dunker, Alan M; Koo, Bonyoung; Yarwood, Greg

    2015-06-01

    The anthropogenic increment of a species is the difference in concentration between a base-case simulation with all emissions included and a background simulation without the anthropogenic emissions. The Path-Integral Method (PIM) is a new technique that can determine the contributions of individual anthropogenic sources to this increment. The PIM was applied to a simulation of O3 formation in July 2030 in the U.S., using the Comprehensive Air Quality Model with Extensions and assuming advanced controls on light-duty vehicles (LDVs) and other sources. The PIM determines the source contributions by integrating first-order sensitivity coefficients over a range of emissions, a path, from the background case to the base case. There are many potential paths, with each representing a specific emission-control strategy leading to zero anthropogenic emissions, i.e., controlling all sources together versus controlling some source(s) preferentially are different paths. Three paths were considered, and the O3, formaldehyde, and NO2 anthropogenic increments were apportioned to five source categories. At rural and urban sites in the eastern U.S. and for all three paths, point sources typically have the largest contribution to the O3 and NO2 anthropogenic increments, and either LDVs or area sources, the smallest. Results for formaldehyde are more complex. PMID:25938820
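
    The PIM's defining operation, integrating first-order sensitivities along an emission-scaling path from the background to the base case, can be illustrated with a toy two-source concentration function (invented for illustration); the source contributions then sum exactly to the anthropogenic increment.

```python
# Toy sketch of path-integral source apportionment: the contribution of source j is
# the integral over lambda of dC/dE_j evaluated along E(lambda) = lambda * E_base,
# times E_base_j. The contributions sum exactly to the anthropogenic increment.
import numpy as np

E_base = np.array([10.0, 4.0])                       # base-case emissions of two sources
C = lambda E: 20.0 + 3.0 * E[0] + 1.5 * E[1] - 0.08 * E[0] * E[1]  # invented, nonlinear

lam = np.linspace(0.0, 1.0, 201)                     # straight path: background -> base case
h = 1e-6
contrib = np.zeros(2)
for j in range(2):
    e = np.zeros(2)
    e[j] = h
    sens = np.array([(C(l * E_base + e) - C(l * E_base - e)) / (2 * h) for l in lam])
    dlam = lam[1] - lam[0]
    contrib[j] = np.sum(0.5 * (sens[:-1] + sens[1:])) * dlam * E_base[j]   # trapezoid rule

increment = C(E_base) - C(np.zeros(2))
print(contrib, contrib.sum(), increment)             # e.g. [28.4  4.4] 32.8 32.8
```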

  10. A Method to Estimate the Probability that any Individual Cloud-to-Ground Lightning Stroke was Within any Radius of any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud to ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even with the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.

  11. A Method to Estimate the Probability That Any Individual Cloud-to-Ground Lightning Stroke Was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa L.; Roeder, William P.; Merceret, Francis J.

    2010-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force station.

  12. An analysis of quantum effects on the thermodynamic properties of cryogenic hydrogen using the path integral method.

    PubMed

    Nagashima, H; Tsuda, S; Tsuboi, N; Koshi, M; Hayashi, K A; Tokumasu, T

    2014-04-01

    In this paper, we describe the analysis of the thermodynamic properties of cryogenic hydrogen using classical molecular dynamics (MD) and path integral MD (PIMD) methods to understand the effects of the quantum nature of hydrogen molecules. We performed constant NVE MD simulations across a wide density-temperature region to establish an equation of state (EOS). Moreover, the effect of quantum behavior on the molecular mechanism underlying the pressure-volume-temperature relationship was addressed. The EOS was derived from the MD simulation results alone, based on a classical-mechanics picture. Simulation results were compared between the two MD methods and with experimental data. As a result, it was confirmed that although the EOS on the basis of classical MD cannot reproduce the experimental saturation properties of hydrogen in the high-density region, the EOS on the basis of PIMD reproduces those thermodynamic properties of hydrogen well. Moreover, it was clarified that taking quantum effects into account makes the repulsion force larger and the potential well shallower. Because of this mechanism, the intermolecular interaction of hydrogen molecules diminishes and the virial pressure increases. PMID:24712800

  13. The lead-lag relationship between stock index and stock index futures: A thermal optimal path method

    NASA Astrophysics Data System (ADS)

    Gong, Chen-Chen; Ji, Shen-Dan; Su, Li-Ling; Li, Sai-Ping; Ren, Fei

    2016-02-01

    The study of the lead-lag relationship between stock index and stock index futures is of great importance for its wide application in hedging and portfolio investments. Previous works mainly use conventional methods such as the Granger causality test, GARCH models and error correction models, and focus on the causality relation between the index and futures in a certain period. By using a non-parametric approach, the thermal optimal path (TOP) method, we study the lead-lag relationship between the China Securities Index 300 (CSI 300), Hang Seng Index (HSI), Standard and Poor 500 (S&P 500) Index and their associated futures to reveal how their relationship varies over time. Our finding shows evidence of pronounced futures leadership for well-established index futures, namely HSI and S&P 500 index futures, while for a developing-market index such as the CSI 300 the spot index shows pronounced leadership. We offer an explanation for changes in the lead-lag function based on an indicator that quantifies the differences between spot and futures prices. Our results provide new perspectives for the understanding of the dynamical evolution of the lead-lag relationship between stock index and stock index futures, which is valuable for the study of market efficiency and its applications.

  14. An analysis of quantum effects on the thermodynamic properties of cryogenic hydrogen using the path integral method

    SciTech Connect

    Nagashima, H.; Tsuda, S.; Tsuboi, N.; Koshi, M.; Hayashi, K. A.; Tokumasu, T.

    2014-04-07

    In this paper, we describe the analysis of the thermodynamic properties of cryogenic hydrogen using classical molecular dynamics (MD) and path integral MD (PIMD) methods to understand the effects of the quantum nature of hydrogen molecules. We performed constant NVE MD simulations across a wide density–temperature region to establish an equation of state (EOS). Moreover, the effect of quantum behavior on the molecular mechanism underlying the pressure–volume–temperature relationship was addressed. The EOS was derived from the MD simulation results alone, based on a classical-mechanics picture. Simulation results were compared between the two MD methods and with experimental data. As a result, it was confirmed that although the EOS on the basis of classical MD cannot reproduce the experimental saturation properties of hydrogen in the high-density region, the EOS on the basis of PIMD reproduces those thermodynamic properties of hydrogen well. Moreover, it was clarified that taking quantum effects into account makes the repulsion force larger and the potential well shallower. Because of this mechanism, the intermolecular interaction of hydrogen molecules diminishes and the virial pressure increases.

  15. NMR signal for particles diffusing under potentials: From path integrals and numerical methods to a model of diffusion anisotropy

    NASA Astrophysics Data System (ADS)

    Yolcu, Cem; Memiç, Muhammet; Şimşek, Kadir; Westin, Carl-Fredrik; Özarslan, Evren

    2016-05-01

    We study the influence of diffusion on NMR experiments when the molecules undergo random motion under the influence of a force field and place special emphasis on parabolic (Hookean) potentials. To this end, the problem is studied using path integral methods. Explicit relationships are derived for commonly employed gradient waveforms involving pulsed and oscillating gradients. The Bloch-Torrey equation, describing the temporal evolution of magnetization, is modified by incorporating potentials. A general solution to this equation is obtained for the case of parabolic potential by adopting the multiple correlation function (MCF) formalism, which has been used in the past to quantify the effects of restricted diffusion. Both analytical and MCF results were found to be in agreement with random walk simulations. A multidimensional formulation of the problem is introduced that leads to a new characterization of diffusion anisotropy. Unlike the case of traditional methods that employ a diffusion tensor, anisotropy originates from the tensorial force constant, and bulk diffusivity is retained in the formulation. Our findings suggest that some features of the NMR signal that have traditionally been attributed to restricted diffusion are accommodated by the Hookean model. Under certain conditions, the formalism can be envisioned to provide a viable approximation to the mathematically more challenging restricted diffusion problems.

  16. Statistical methods to quantify the effect of mite parasitism on the probability of death in honey bee colonies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Varroa destructor is a mite parasite of European honey bees, Apis mellifera, that weakens the population, can lead to the death of an entire honey bee colony, and is believed to be the parasite with the most economic impact on beekeeping. The purpose of this study was to estimate the probability of ...

  17. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former yields the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified. PMID:27508502
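
    The n-step computation described here is ordinary matrix algebra: row-normalize the single-step rule counts and raise the resulting matrix to the n-th power. A small sketch with made-up locations and counts:

```python
# Sketch of n-step transition probabilities from single-step rule counts:
# row-normalize the count matrix, then take its n-th matrix power.
import numpy as np

locations = ["home", "office", "mall", "park"]
counts = np.array([[ 2, 30,  5,  3],     # made-up single-step rule counts
                   [25,  4,  8,  3],
                   [ 4, 10,  2, 14],
                   [10,  2, 12,  1]], dtype=float)

P = counts / counts.sum(axis=1, keepdims=True)   # normalized transition matrix
n = 3
Pn = np.linalg.matrix_power(P, n)                # n-step transition probabilities

i, j = locations.index("home"), locations.index("park")
print(f"P({locations[j]} | {locations[i]}, {n} steps) = {Pn[i, j]:.3f}")
```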

  18. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset.

    PubMed

    Zhang, Haitao; Chen, Zewei; Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former yields the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm are verified. PMID:27508502

  19. Comparing laser-based open- and closed-path gas analyzers to measure methane fluxes using the eddy covariance method

    USGS Publications Warehouse

    Detto, M.; Verfaillie, J.; Anderson, F.; Xu, L.; Baldocchi, D.

    2011-01-01

    Closed- and open-path methane gas analyzers are used in eddy covariance systems to compare three potential methane-emitting ecosystems in the Sacramento-San Joaquin Delta (CA, USA): a rice field, a peatland pasture and a restored wetland. The study points out similarities and differences of the systems in field experiments and data processing. The closed-path system, despite a less intrusive placement with the sonic anemometer, required more care and power. In contrast, the open-path system appears more versatile for a remote and unattended experimental site. Overall, the two systems have comparable minimum detectable limits, but synchronization between wind speed and methane data, air density corrections and spectral losses have different impacts on the computed flux covariances. For the closed-path analyzer, air density effects are less important, but the synchronization and spectral losses may represent a problem when fluxes are small or when an undersized pump is used. For the open-path analyzer air density corrections are greater, due to spectroscopy effects and the classic Webb-Pearman-Leuning correction. Comparison between the 30-min fluxes reveals good agreement in terms of magnitude between open-path and closed-path flux systems. However, the scatter is large, as a consequence of the intensive data processing that both systems require.

  20. Path Pascal

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Kolstad, R. B.; Holle, D. F.; Miller, T. J.; Krause, P.; Horton, K.; Macke, T.

    1983-01-01

    Path Pascal is a high-level experimental programming language based on PASCAL that incorporates extensions for systems and real-time programming. Pascal is extended to treat real-time concurrent systems.

  1. A Random Walk on a Circular Path

    ERIC Educational Resources Information Center

    Ching, W.-K.; Lee, M. S.

    2005-01-01

    This short note introduces an interesting random walk on a circular path with cards of numbers. By using high school probability theory, it is proved that under some assumptions on the number of cards, the probability that a walker will return to a fixed position will tend to one as the length of the circular path tends to infinity.

  2. Nano-structural analysis of effective transport paths in fuel-cell catalyst layers by using stochastic material network methods

    NASA Astrophysics Data System (ADS)

    Shin, Seungho; Kim, Ah-Reum; Um, Sukkee

    2016-02-01

    A two-dimensional material network model has been developed to visualize the nano-structures of fuel-cell catalysts and to search for effective transport paths for the optimal performance of fuel cells in randomly-disordered composite catalysts. Stochastic random modeling based on the Monte Carlo method is developed using random number generation processes over a catalyst layer domain at a 95% confidence level. After the post-determination process of the effective connectivity, particularly for mass transport, the effective catalyst utilization factors are introduced to determine the extent of catalyst utilization in the fuel cells. The results show that the superficial pore volume fractions of 600 trials approximate a normal distribution curve with a mean of 0.5. In contrast, the estimated volume fraction of effectively inter-connected void clusters ranges from 0.097 to 0.420, which is much smaller than the superficial porosity of 0.5 before the percolation process. Furthermore, the effective catalyst utilization factor is determined to be linearly proportional to the effective porosity. More importantly, this study reveals that the average catalyst utilization is less affected by the variations of the catalyst's particle size and the absolute catalyst loading at a fixed volume fraction of void spaces.
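
    The percolation step described above can be illustrated with a simplified sketch: a random two-dimensional void/solid grid is generated at a superficial porosity of 0.5, connected void clusters are labelled, and only clusters spanning the through-plane direction count toward the effective porosity. The grid size, connectivity rule and spanning criterion are assumptions for illustration, not the paper's material network model, so the numbers will differ from those reported.

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)

# Hypothetical 2D catalyst-layer domain: 1 = void (pore), 0 = solid,
# generated at a superficial void fraction of 0.5.
grid = (rng.random((200, 200)) < 0.5).astype(int)

# Label connected void clusters (8-connectivity assumed here; the paper's
# connectivity criterion may differ).
labels, _ = label(grid, structure=np.ones((3, 3)))

# A void cluster is counted as effective for through-plane transport only if
# it touches both the top and bottom boundaries of the domain.
top = set(labels[0, :]) - {0}
bottom = set(labels[-1, :]) - {0}
spanning = top & bottom

effective = np.isin(labels, list(spanning)).sum()
print("superficial porosity:", grid.mean())
print("effective porosity  :", effective / grid.size)
```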

  3. Determination of the inelastic mean free path of electrons in polythiophenes using elastic peak electron spectroscopy method

    NASA Astrophysics Data System (ADS)

    Lesiak, B.; Kosinski, A.; Jablonski, A.; Kövér, L.; Tóth, J.; Varga, D.; Cserny, I.; Zagorska, M.; Kulszewicz-Bajer, I.; Gergely, G.

    2001-04-01

    The inelastic mean free path (IMFP) is an important parameter for quantitative surface characterisation by Auger electron spectroscopy, X-ray photoelectron spectroscopy or electron energy loss spectroscopy. An extensive database of the IMFPs for selected elements, inorganic and organic compounds has recently been published by Powell and Jablonski. As follows from this compilation, the published material on IMFPs for conductive polymers is very limited. Selected polymers, such as polyacetylenes and polyanilines, have been investigated only recently. The present study is a continuation of the research on IMFP determination in conductive polymers using the elastic peak electron spectroscopy (EPES) method. In the present study three polythiophene samples have been studied using a high-energy-resolution spectrometer and two standards: Ni and Ag. The resulting experimental IMFPs are compared to the respective IMFP values determined using the predictive formulae proposed by Tanuma and Powell (TPP-2M) and by Gries (G1), showing a good agreement. The scatter between the experimental and predicted IMFPs in polythiophenes is evaluated. The statistical and systematic errors, their sources and the possible contributions to the systematic error due to the influence of the accuracy of the input parameters, such as the surface composition and density, on the IMFPs derived from the experiments and Monte Carlo calculations, are extensively discussed.

  4. Small-scale spatial variations of gaseous air pollutants - A comparison of path-integrated and in situ measurement methods

    NASA Astrophysics Data System (ADS)

    Ling, Hong; Schäfer, Klaus; Xin, Jinyuan; Qin, Min; Suppan, Peter; Wang, Yuesi

    2014-08-01

    Traffic emissions are a very important factor in Beijing's urban air quality. To investigate small-scale spatial variations in air pollutants, a campaign was carried out from April 2009 through March 2011 in Beijing. DOAS (differential optical absorption spectroscopy) systems and in situ instruments were used. Atmospheric NO, NO2, O3 and SO2 mixing ratios were monitored. Meanwhile, HCHO mixing ratios were measured by two different DOAS systems. Diurnal variations of these mixing ratios were analysed. Differences between the path-integrated and in situ measurements were investigated based on the results from the campaign. The influences of different weather situations, dilution conditions and light-path locations were investigated as well. The results show that the differences between path-integrated and in situ mixing ratios were affected by combinations of emission source strengths, weather conditions, chemical transformations and local convection. Path-integrated measurements satisfy the requirements of traffic emission investigations better than in situ measurements.

  5. Probability tree algorithm for general diffusion processes

    NASA Astrophysics Data System (ADS)

    Ingber, Lester; Chen, Colleen; Mondescu, Radu Paul; Muzzall, David; Renedo, Marco

    2001-11-01

    Motivated by path-integral numerical solutions of diffusion processes, PATHINT, we present a tree algorithm, PATHTREE, which permits extremely fast accurate computation of probability distributions of a large class of general nonlinear diffusion processes.

  6. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational-rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    NASA Astrophysics Data System (ADS)

    Mielke, Steven L.; Truhlar, Donald G.

    2015-01-01

    We present an improved version of our "path-by-path" enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^−6) to O(P^−12), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational-rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan-Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ˜1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300-3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  7. Analysis of Demagnetization Path of A Multiple-component Nrm: Testing of The Present Method and A New Data-inversion Approach

    NASA Astrophysics Data System (ADS)

    Man, O.

    The particular components of NRM, formed by various processes during the geological history, can be distinguished by the progressive demagnetization technique provided that they differ from one another in the blocking temperature distributions. A sequence of three-dimensional vectors (assigned to fixed values of blocking temperature) resulting from this experiment is called the demagnetization path. The present method of its analysis, consisting in finding its linear segments, is tested in the present paper together with the techniques suggested herein. The following mathematical model of the demagnetization path is used for this purpose: Each component of NRM is supposed to be described by a vector of magnetization and a function of distribution of blocking temperature; two types of distribution are considered, namely the gamma distribution and the normal one. Both types are defined by two parameters and, therefore, each component of NRM is described by five parameters. Moreover, the model of the demagnetization path is completed by superimposing Gaussian white noise (representing, e.g., the measurement errors) on the above model of NRM. The tests prove that the present method of identification of NRM components fails if either the distributions of blocking temperature overlap one another or the noise level is too high. The suggested method of analysis of the demagnetization path consists in maximum likelihood estimation of all parameters describing the NRM model, i.e., in an iterative minimization of deflections of this model from the demagnetization path. The minimization yields a correct estimate of parameters on condition that their initial guess is sufficiently close to their actual values. The initial guess may be found by formal smoothing of the demagnetization path by a spline function.

  8. A Didactic Proposed for Teaching the Concepts of Electrons and Light in Secondary School Using Feynman's Path Sum Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Arlego, Marcelo; Otero, Maria Rita

    2012-01-01

    This work comprises an investigation about basic Quantum Mechanics (QM) teaching in the high school. The organization of the concepts does not follow a historical line. The Path Integrals method of Feynman has been adopted as a Reference Conceptual Structure that is an alternative to the canonical formalism. We have designed a didactic sequence…

  9. Incorporating seasonality into event-based joint probability methods for predicting flood frequency: A hybrid causative event approach

    NASA Astrophysics Data System (ADS)

    Li, Jing; Thyer, Mark; Lambert, Martin; Kuzera, George; Metcalfe, Andrew

    2016-02-01

    Flood extremes are driven by highly variable and complex climatic and hydrological processes. Observational evidence has identified that seasonality of climate variables has a major impact on flood peaks. However, event-based joint probability approaches for predicting the flood frequency distribution (FFD), which are commonly used in practice, do not commonly incorporate climate seasonality. This study presents an advance in event-based joint probability approaches by incorporating seasonality using the hybrid causative events (HCE) approach. The HCE was chosen because it uses the true causative events of the floods of interest and is able to combine the accuracy of continuous simulation with the computational efficiency of event-based approaches. The incorporation of seasonality is evaluated using a virtual catchment approach at eight sites over a wide range of Australian climate zones, including tropical, temperate, Mediterranean and desert climates (virtual catchment data for the eight sites is freely available via digital repository). The seasonal HCE provided accurate predictions of the FFD at all sites. In contrast, the non-seasonal HCE significantly over-predicted the FFD at some sites. The need to include seasonality was influenced by the magnitude of the seasonal variation in soil moisture and its coherence with the seasonal variation in extreme rainfall. For sites with a low seasonal variation in soil moisture the non-seasonal HCE provided reliable estimates of the FFD. For the remaining sites, it was found difficult to predict a priori whether ignoring seasonality provided a reliable estimate of the FFD, hence it is recommended that the seasonal HCE always be used. The practical implications of this study are that the HCE approach with seasonality is an accurate and efficient event-based joint probability approach to derive the flood frequency distribution across a wide range of climatologies.

  10. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
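
    A minimal sketch of the technique the article describes, written here in Python rather than Visual Basic: the definite integral of f over [a, b] is expressed as (b − a) times the expectation of f at a uniformly distributed point, and that expectation is estimated from random samples.

```python
import random
import math

def mc_integral(f, a, b, n=100_000, seed=1):
    """Estimate the definite integral of f over [a, b] as (b - a) * E[f(U)],
    where U is uniform on [a, b], using n random samples."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of sin(x) from 0 to pi is exactly 2.
print(mc_integral(math.sin, 0.0, math.pi))   # approximately 2.0
```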

  11. Emptiness Formation Probability

    NASA Astrophysics Data System (ADS)

    Crawford, Nicholas; Ng, Stephen; Starr, Shannon

    2016-08-01

    We present rigorous upper and lower bounds on the emptiness formation probability for the ground state of a spin-1/2 Heisenberg XXZ quantum spin system. For a d-dimensional system we find a rate of decay of the order exp(-c L^{d+1}), where L is the sidelength of the box in which we ask for the emptiness formation event to occur. In the d = 1 case this confirms previous predictions made in the integrable systems community, though our bounds do not achieve the precision predicted by Bethe ansatz calculations. On the other hand, our bounds in the case d ≥ 2 are new. The main tools we use are reflection positivity and a rigorous path integral expansion, which is a variation on those previously introduced by Toth, Aizenman-Nachtergaele and Ueltschi.

  12. Methods for estimating annual exceedance-probability discharges for streams in Iowa, based on data through water year 2010

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; Veilleux, Andrea G.

    2013-01-01

    A statewide study was performed to develop regional regression equations for estimating selected annual exceedance-probability statistics for ungaged stream sites in Iowa. The study area comprises streamgages located within Iowa and 50 miles beyond the State’s borders. Annual exceedance-probability estimates were computed for 518 streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data through 2010. The estimation of the selected statistics included a Bayesian weighted least-squares/generalized least-squares regression analysis to update regional skew coefficients for the 518 streamgages. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low flows. Also, geographic information system software was used to measure 59 selected basin characteristics for each streamgage. Regional regression analysis, using generalized least-squares regression, was used to develop a set of equations for each flood region in Iowa for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. A total of 394 streamgages were included in the development of regional regression equations for three flood regions (regions 1, 2, and 3) that were defined for Iowa based on landform regions and soil regions. Average standard errors of prediction range from 31.8 to 45.2 percent for flood region 1, 19.4 to 46.8 percent for flood region 2, and 26.5 to 43.1 percent for flood region 3. The pseudo coefficients of determination for the generalized least-squares equations range from 90.8 to 96.2 percent for flood region 1, 91.5 to 97

  13. Efficient methods for including quantum effects in Monte Carlo calculations of large systems: Extension of the displaced points path integral method and other effective potential methods to calculate properties and distributions

    NASA Astrophysics Data System (ADS)

    Mielke, Steven L.; Dinpajooh, Mohammadhasan; Siepmann, J. Ilja; Truhlar, Donald G.

    2013-01-01

    We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.

  14. Improved methods for Feynman path integral calculations and their application to calculate converged vibrational–rotational partition functions, free energies, enthalpies, entropies, and heat capacities for methane

    SciTech Connect

    Mielke, Steven L. E-mail: truhlar@umn.edu; Truhlar, Donald G. E-mail: truhlar@umn.edu

    2015-01-28

    We present an improved version of our “path-by-path” enhanced same path extrapolation scheme for Feynman path integral (FPI) calculations that permits rapid convergence with discretization errors ranging from O(P^−6) to O(P^−12), where P is the number of path discretization points. We also present two extensions of our importance sampling and stratified sampling schemes for calculating vibrational–rotational partition functions by the FPI method. The first is the use of importance functions for dihedral angles between sets of generalized Jacobi coordinate vectors. The second is an extension of our stratification scheme to allow some strata to be defined based only on coordinate information while other strata are defined based on both the geometry and the energy of the centroid of the Feynman path. These enhanced methods are applied to calculate converged partition functions by FPI methods, and these results are compared to ones obtained earlier by vibrational configuration interaction (VCI) calculations, both calculations being for the Jordan–Gilbert potential energy surface. The earlier VCI calculations are found to agree well (within ∼1.5%) with the new benchmarks. The FPI partition functions presented here are estimated to be converged to within a 2σ statistical uncertainty of between 0.04% and 0.07% for the given potential energy surface for temperatures in the range 300–3000 K and are the most accurately converged partition functions for a given potential energy surface for any molecule with five or more atoms. We also tabulate free energies, enthalpies, entropies, and heat capacities.

  15. Methods for estimating annual exceedance-probability discharges and largest recorded floods for unregulated streams in rural Missouri

    USGS Publications Warehouse

    Southard, Rodney E.; Veilleux, Andrea G.

    2014-01-01

    Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were

  16. Racial Models of the Consistency of Occupational Status Projections: Submodeling Using the Heise Path-Panel Method. Preliminary Draft.

    ERIC Educational Resources Information Center

    Cosby, Arthur G.; And Others

    This report focused on the goal of investigating, within a path analytic framework, the stability and interplay of two occupational status projection variables in a Texas sample. More specifically, the dynamics of occupational aspirations and occupational expectations, observed in a three-wave rural youth panel, were analyzed using the…

  17. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local-maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo Inversion (MCI) technique and with all parameters obtained from step one as the initial solution. Then the slip artifacts are eliminated from the slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of

  18. A Finding Method of Business Risk Factors Using Characteristics of Probability Distributions of Effect Ratios on Qualitative and Quantitative Hybrid Simulation

    NASA Astrophysics Data System (ADS)

    Samejima, Masaki; Negoro, Keisuke; Mitsukuni, Koshichiro; Akiyoshi, Masanori

    We propose a method for finding business risk factors on qualitative and quantitative hybrid simulation in time series. Effect ratios of qualitative arcs in the hybrid simulation vary the output values of the simulation, so we define effect ratios causing risk as business risk factors. Finding business risk factors over the entire ranges of effect ratios is time-consuming. Because the probability distributions of effect ratios in the present time step are considered similar to those in the previous time step, the distributions in the present time step can be estimated. Our method searches for business risk factors only within the estimated ranges, which makes the search effective. Experimental results show that the precision rate and the recall rate are both 86%, and that search time is decreased by at least 20%.

  19. A Method to Estimate the Probability that Any Individual Cloud-to-Ground Lightning Stroke was Within Any Radius of Any Point

    NASA Technical Reports Server (NTRS)

    Huddleston, Lisa; Roeder, WIlliam P.; Merceret, Francis J.

    2011-01-01

    A new technique has been developed to estimate the probability that a nearby cloud-to-ground lightning stroke was within a specified radius of any point of interest. This process uses the bivariate Gaussian distribution of probability density provided by the current lightning location error ellipse for the most likely location of a lightning stroke and integrates it to determine the probability that the stroke is inside any specified radius of any location, even if that location is not centered on or even within the location error ellipse. This technique is adapted from a method of calculating the probability of debris collision with spacecraft. Such a technique is important in spaceport processing activities because it allows engineers to quantify the risk of induced current damage to critical electronics due to nearby lightning strokes. This technique was tested extensively and is now in use by space launch organizations at Kennedy Space Center and Cape Canaveral Air Force Station. Future applications could include forensic meteorology.
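
    The computation described above can be sketched as follows. The paper integrates the bivariate Gaussian density directly; the sketch below instead estimates the same quantity by Monte Carlo sampling from the error-ellipse distribution, and all numbers (ellipse axes, offset, radius) are hypothetical.

```python
import numpy as np

def prob_within_radius(mu, cov, center, radius, n=200_000, seed=0):
    """Probability that a point drawn from N(mu, cov) -- the lightning location
    error ellipse -- falls within `radius` of `center`.  Estimated here by
    Monte Carlo sampling; the method in the paper integrates the bivariate
    density numerically instead."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(mu, cov, size=n)
    d = np.linalg.norm(pts - np.asarray(center), axis=1)
    return np.mean(d <= radius)

# Hypothetical numbers: stroke located at (0, 0) km with 1 km x 0.5 km error
# axes, point of interest 0.8 km away, radius of concern 1 km.
mu = [0.0, 0.0]
cov = [[1.0**2, 0.0], [0.0, 0.5**2]]
print(prob_within_radius(mu, cov, center=(0.8, 0.0), radius=1.0))
```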

  20. Mathematical analysis of the Saint-Venant-Hirano model and numerical solution by path-conservative methods.

    NASA Astrophysics Data System (ADS)

    Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid

    2014-05-01

    synchronous approach, by which all the variables are updated simultaneously. The non-conservative problem which stems from the developed matrix-vector formulation is solved using path-conservative methods. We perform numerical applications by comparison with the above linearised solutions and with the data from laboratory experiments. Results show that our solution approach is robust, general and accurate. References - Hirano, M. (1971), River bed degradation with armoring, Trans. Jpn. Soc. Civ. Eng., (3), 194-195. - Hirano, M. (1972), Studies on variation and equilibrium state of a river bed composed of nonuniform material, Trans. Jpn. Soc. Civ. Eng., (4), 128-129. - Stecca, G., A. Siviglia, and A. Blom, Mathematical analysis of the Saint-Venant-Hirano model for mixed-sediment morphodynamics, Submitted to Water Resources Research

  1. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
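
    A minimal sketch of the scaling idea, under assumed numbers: a zero-absorption time-resolved reflectance curve is attenuated by a weighted Beer-Lambert factor built from per-layer absorption coefficients and the fraction of time the average classical path spends in each layer (held constant here, whereas the paper derives it from a closed-form path expression).

```python
import numpy as np

# Hypothetical zero-absorption time-resolved reflectance R0(t) from a Monte
# Carlo run (arbitrary units) on a picosecond time grid.
t = np.linspace(10e-12, 2000e-12, 200)        # time after the source pulse, s
R0 = t * np.exp(-t / 400e-12)                 # placeholder curve shape

v = 3.0e8 / 1.4                               # photon speed in tissue (n = 1.4), m/s
mu_a = np.array([1.0, 3.0])                   # absorption coefficients of the two layers, 1/m
frac = np.array([0.6, 0.4])                   # fraction of time spent in each layer (assumed)

# Weighted Beer-Lambert scaling: attenuate the zero-absorption curve by the
# absorption accumulated along the path length v*t apportioned to each layer.
R = R0 * np.exp(-np.sum(mu_a * frac) * v * t)
print("attenuation factor at the last time point:", (R / R0)[-1])
```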

  2. Methods for designing treatments to reduce interior noise of predominant sources and paths in a single engine light aircraft

    NASA Technical Reports Server (NTRS)

    Hayden, Richard E.; Remington, Paul J.; Theobald, Mark A.; Wilby, John F.

    1985-01-01

    The sources and paths by which noise enters the cabin of a small single engine aircraft were determined through a combination of flight and laboratory tests. The primary sources of noise were found to be airborne noise from the propeller and engine casing, airborne noise from the engine exhaust, structureborne noise from the engine/propeller combination and noise associated with air flow over the fuselage. For the propeller, the primary airborne paths were through the firewall, windshield and roof. For the engine, the most important airborne path was through the firewall. Exhaust noise was found to enter the cabin primarily through the panels in the vicinity of the exhaust outlet although exhaust noise entering the cabin through the firewall is a distinct possibility. A number of noise control techniques were tried, including firewall stiffening to reduce engine and propeller airborne noise, two-stage isolators and engine mounting spider stiffening to reduce structure-borne noise, and wheel well covers to reduce air flow noise.

  3. Computation of Most Probable Numbers

    PubMed Central

    Russek, Estelle; Colwell, Rita R.

    1983-01-01

    A rapid computational method for maximum likelihood estimation of most-probable-number values, incorporating a modified Newton-Raphson method, is presented. The method offers a much greater reliability for the most-probable-number estimate of total viable bacteria, i.e., those capable of growth in laboratory media. PMID:6870242
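
    The estimation problem solved by the paper's program can be sketched with a plain Newton-Raphson iteration on the standard most-probable-number likelihood (the paper uses a modified Newton-Raphson; the dilution series below is a generic hypothetical example).

```python
import math

def mpn_mle(volumes, tubes, positives, lam=1.0, tol=1e-10, max_iter=100):
    """Maximum-likelihood estimate of organism density (per mL) for a
    most-probable-number dilution series via Newton-Raphson.
    volumes[i]  : sample volume (mL) inoculated per tube at dilution i
    tubes[i]    : number of tubes at dilution i
    positives[i]: number of positive tubes at dilution i
    """
    for _ in range(max_iter):
        score = hess = 0.0
        for v, n, p in zip(volumes, tubes, positives):
            e = math.exp(-lam * v)
            # derivative and second derivative of the log-likelihood wrt lam
            score += p * v * e / (1.0 - e) - (n - p) * v
            hess += -p * v * v * e / (1.0 - e) ** 2
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam

# Classic 3-dilution, 5-tube series: 10, 1 and 0.1 mL with 5, 3, 1 positive tubes.
print(mpn_mle(volumes=[10.0, 1.0, 0.1], tubes=[5, 5, 5], positives=[5, 3, 1]))
```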

  4. The albedo effect on neutron transmission probability.

    PubMed

    Khanouchi, A; Sabir, A; Boulkheir, M; Ichaoui, R; Ghassoun, J; Jehouani, A

    1997-01-01

    The aim of this study is to evaluate the albedo effect on the neutron transmission probability through slab shields. For this reason we have considered an infinite homogeneous slab having a fixed thickness equal to 20 lambda (lambda is the mean free path of the neutron in the slab). This slab is characterized by the factor Ps (scattering probability) and contains a vacuum channel which is formed by two horizontal parts and an inclined one (David, M. C. (1962) Ducts and Voids in shields. In Reactor Handbook, Vol. III, Part B, p. 166). The thickness of the vacuum channel is taken equal to 2 lambda. An infinite plane source of neutrons is placed on the first face of the slab (left face) and detectors, having windows equal to 2 lambda, are placed on the second face of the slab (right face). Neutron histories are sampled by the Monte Carlo method (Booth, T. E. and Hendricks, J. S. (1994) Nuclear Technology 5) using exponential biasing in order to increase the Monte Carlo calculation efficiency (Levitt, L. B. (1968) Nuclear Science and Engineering 31, 500-504; Jehouani, A., Ghassoun, J. and Abouker, A. (1994) In Proceedings of the 6th International Symposium on Radiation Physics, Rabat, Morocco), and we have applied the statistical weight method, which supposes that the neutron is born at the source with a unit statistical weight and that after each collision this weight is corrected. For different values of the scattering probability and for different slopes of the inclined part of the channel, we have calculated the neutron transmission probability for different positions of the detectors versus the albedo at the vacuum channel-medium interface. Some analytical representations are also presented for these transmission probabilities. PMID:9463883
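
    The statistical-weight idea used above can be sketched for a bare homogeneous slab (no channel, no albedo boundary, no exponential biasing, and a thinner slab than the paper's 20λ so that the analogue estimate is non-trivial): at each collision the history survives with its weight multiplied by the scattering probability instead of being killed.

```python
import math
import random

def slab_transmission(thickness=5.0, p_scatter=0.8, n_histories=20_000, seed=2):
    """Transmission probability of a bare homogeneous slab estimated with the
    statistical-weight (implicit capture) Monte Carlo method.  Lengths are in
    mean free paths; neutrons enter normally at the left face."""
    rng = random.Random(seed)
    transmitted = 0.0
    for _ in range(n_histories):
        x, mu, w = 0.0, 1.0, 1.0                       # position, direction cosine, weight
        while True:
            x += mu * (-math.log(1.0 - rng.random()))  # sample the next flight length
            if x >= thickness:
                transmitted += w                       # escapes through the right face
                break
            if x < 0.0:
                break                                  # escapes back through the left face
            w *= p_scatter                             # implicit capture: survive with reduced weight
            if w < 1e-6:
                break                                  # crude weight cutoff
            mu = 2.0 * rng.random() - 1.0              # isotropic scattering in slab geometry
    return transmitted / n_histories

print(slab_transmission())
```

    For a 20λ slab like the one studied in the paper, the analogue estimate above is essentially zero with any practical number of histories, which is precisely why exponential biasing is applied there.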

  5. Chemical potential and entropy in monodisperse and polydisperse hard-sphere fluids using Widom's particle insertion method and a pore size distribution-based insertion probability.

    PubMed

    Baranau, Vasili; Tallarek, Ulrich

    2016-06-01

    We estimate the excess chemical potential Δμ and excess entropy per particle Δs of computer-generated, monodisperse and polydisperse, frictionless hard-sphere fluids. For this purpose, we utilize the Widom particle insertion method, which for hard-sphere systems relates Δμ to the probability to successfully (without intersections) insert a particle into a system. This insertion probability is evaluated directly for each configuration of hard spheres by extrapolating to infinity the pore radii (nearest-surface) distribution and integrating its tail. The estimates of Δμ and Δs are compared to (and comply well with) predictions from the Boublík-Mansoori-Carnahan-Starling-Leland equation of state. For polydisperse spheres, we employ log-normal particle radii distributions with polydispersities δ = 0.1, 0.2, and 0.3. PMID:27276959
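
    The Widom estimator itself is compact: the excess chemical potential in units of kT is minus the logarithm of the probability that a randomly placed trial sphere overlaps no existing sphere. The sketch below evaluates that probability by direct trial insertion on a randomly generated (not equilibrated) configuration, so it only illustrates the estimator, not the paper's pore-size-distribution route or its equation-of-state comparison.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical configuration: N sphere centers placed uniformly at random in a
# periodic cubic box (a real estimate would use equilibrated hard-sphere
# configurations, as in the paper).
N, sigma, L = 200, 1.0, 8.0
centers = rng.random((N, 3)) * L

def insertion_probability(n_trials=20_000):
    """Fraction of random trial insertions of a sphere of diameter sigma that
    overlap no existing sphere (minimum-image periodic boundaries)."""
    ok = 0
    for p in rng.random((n_trials, 3)) * L:
        d = centers - p
        d -= L * np.round(d / L)                   # minimum-image convention
        if np.all((d * d).sum(axis=1) >= sigma**2):
            ok += 1
    return ok / n_trials

p_ins = insertion_probability()
print("insertion probability:", p_ins)
print("beta * excess chemical potential:", -np.log(p_ins))   # Widom relation
```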

  6. Generalized method for probability-based peptide and protein identification from tandem mass spectrometry data and sequence database searching.

    PubMed

    Ramos-Fernández, Antonio; Paradela, Alberto; Navajas, Rosana; Albar, Juan Pablo

    2008-09-01

    Tandem mass spectrometry-based proteomics is currently in great demand of computational methods that facilitate the elimination of likely false positives in peptide and protein identification. In the last few years, a number of new peptide identification programs have been described, but scores or other significance measures reported by these programs cannot always be directly translated into an easy to interpret error rate measurement such as the false discovery rate. In this work we used generalized lambda distributions to model frequency distributions of database search scores computed by MASCOT, X!TANDEM with k-score plug-in, OMSSA, and InsPecT. From these distributions, we could successfully estimate p values and false discovery rates with high accuracy. From the set of peptide assignments reported by any of these engines, we also defined a generic protein scoring scheme that enabled accurate estimation of protein-level p values by simulation of random score distributions that was also found to yield good estimates of protein-level false discovery rate. The performance of these methods was evaluated by searching four freely available data sets ranging from 40,000 to 285,000 MS/MS spectra. PMID:18515861

  7. Assessment of the probability of failure for EC nondestructive testing based on intrusive spectral stochastic finite element method

    NASA Astrophysics Data System (ADS)

    Oudni, Zehor; Féliachi, Mouloud; Mohellebi, Hassane

    2014-06-01

    This work is undertaken to study the reliability of eddy current nondestructive testing (ED-NDT) when the defect concerns a change of physical property of the material. So, an intrusive spectral stochastic finite element method (SSFEM) is developed in the case of 2D electromagnetic harmonic equation. The electrical conductivity is considered as random variable and is developed in series of Hermite polynomials. The developed model is validated from measurements on NDT device and is applied to the assessment of the reliability of failure in steam generator tubing of nuclear power plants. The exploitation of the model concerns the impedance calculation of the sensor and the assessment of the reliability of failure. The random defect geometry is also considered and results are given.

  8. Oscillator strengths and transition probabilities from the Breit–Pauli R-matrix method: Ne IV

    SciTech Connect

    Nahar, Sultana N.

    2014-09-15

    The atomic parameters–oscillator strengths, line strengths, radiative decay rates (A), and lifetimes–for fine structure transitions of electric dipole (E1) type for the astrophysically abundant ion Ne IV are presented. The results include 868 fine structure levels with n≤ 10, l≤ 9, and 1/2≤J≤ 19/2 of even and odd parities, and the corresponding 83,767 E1 transitions. The calculations were carried out using the relativistic Breit–Pauli R-matrix method in the close coupling approximation. The transitions have been identified spectroscopically using an algorithm based on quantum defect analysis and other criteria. The calculated energies agree with the 103 observed and identified energies to within 3% or better for most of the levels. Some larger differences are also noted. The A-values show good to fair agreement with the very limited number of available transitions in the table compiled by NIST, but show very good agreement with the latest published multi-configuration Hartree–Fock calculations. The present transitions should be useful for diagnostics as well as for precise and complete spectral modeling in the soft X-ray to infra-red regions of astrophysical and laboratory plasmas. -- Highlights: •The first application of BPRM method for accurate E1 transitions in Ne IV is reported. •Amount of atomic data (n going up to 10) is complete for most practical applications. •The calculated energies are in very good agreement with most observed levels. •Very good agreement of A-values and lifetimes with other relativistic calculations. •The results should provide precise nebular abundances, chemical evolution etc.

  9. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176
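
    A minimal sketch of the RSP machinery these centralities are built on (the notation and the toy graph are assumptions, not the paper's): reference random-walk transition probabilities are Boltzmann-weighted by edge costs at an inverse temperature, the walk is killed at the target, and the fundamental matrix yields the expected number of visits to each node on paths from source to target. The proposed betweenness measures aggregate such quantities over source-target pairs.

```python
import numpy as np

def rsp_expected_visits(A, C, theta, s, t):
    """Expected number of visits to each node on randomized-shortest-path walks
    from s to t.  A: adjacency matrix, C: edge-cost matrix, theta: inverse
    temperature (large theta -> shortest paths, small theta -> random walk)."""
    P_ref = A / A.sum(axis=1, keepdims=True)      # natural random-walk reference
    W = P_ref * np.exp(-theta * C)                # Boltzmann-weighted transitions
    W[t, :] = 0.0                                 # kill the walk on reaching the target
    Z = np.linalg.inv(np.eye(len(A)) - W)         # fundamental matrix
    return Z[s, :] * Z[:, t] / Z[s, t]            # expected visit counts per node

# Toy graph: a 4-node path 0-1-2-3 with an extra chord 0-2, unit edge costs.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
C = np.where(A > 0, 1.0, 0.0)
print(rsp_expected_visits(A, C, theta=2.0, s=0, t=3))
```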

  10. Finding the biased-shortest path with minimal congestion in networks via linear-prediction of queue length

    NASA Astrophysics Data System (ADS)

    Shen, Yi; Ren, Gang; Liu, Yang

    2016-06-01

    In this paper, we propose a biased-shortest path method with minimal congestion. In the method, we use linear-prediction to estimate the queue length of nodes, and propose a dynamic accepting probability function for nodes to decide whether accept or reject the incoming packets. The dynamic accepting probability function is based on the idea of homogeneous network flow and is developed to enable nodes to coordinate their queue length to avoid congestion. A path strategy incorporated with the linear-prediction of the queue length and the dynamic accepting probability function of nodes is designed to allow packets to be automatically delivered on un-congested paths with short traveling time. Our method has the advantage of low computation cost because the optimal paths are dynamically self-organized by nodes in the delivering process of packets with local traffic information. We compare our method with the existing methods such as the efficient path method (EPS) and the optimal path method (OPS) on the BA scale-free networks and a real example. The numerical computations show that our method performs best for low network load and has minimum run time due to its low computational cost and local routing scheme.

  11. Using Logistic Regression and Random Forests multivariate statistical methods for landslide spatial probability assessment in North-Est Sicily, Italy

    NASA Astrophysics Data System (ADS)

    Trigila, Alessandro; Iadanza, Carla; Esposito, Carlo; Scarascia-Mugnozza, Gabriele

    2015-04-01

    first phase of the work addressed to identify the spatial relationships between the landslides location and the 13 related factors by using the Frequency Ratio bivariate statistical method. The analysis was then carried out by adopting a multivariate statistical approach, according to the Logistic Regression technique and Random Forests technique that gave best results in terms of AUC. The models were performed and evaluated with different sample sizes and also taking into account the temporal variation of input variables such as burned areas by wildfire. The most significant outcome of this work are: the relevant influence of the sample size on the model results and the strong importance of some environmental factors (e.g. land use and wildfires) for the identification of the depletion zones of extremely rapid shallow landslides.

  12. Follow-up: Prospective compound design using the ‘SAR Matrix’ method and matrix-derived conditional probabilities of activity

    PubMed Central

    Gupta-Ostermann, Disha; Hirose, Yoichiro; Odagami, Takenao; Kouji, Hiroyuki; Bajorath, Jürgen

    2015-01-01

    In a previous Method Article, we have presented the ‘Structure-Activity Relationship (SAR) Matrix’ (SARM) approach. The SARM methodology is designed to systematically extract structurally related compound series from screening or chemical optimization data and organize these series and associated SAR information in matrices reminiscent of R-group tables. SARM calculations also yield many virtual candidate compounds that form a “chemical space envelope” around related series. To further extend the SARM approach, different methods are developed to predict the activity of virtual compounds. In this follow-up contribution, we describe an activity prediction method that derives conditional probabilities of activity from SARMs and report representative results of first prospective applications of this approach. PMID:25949808

  13. Nonadiabatic transition path sampling

    NASA Astrophysics Data System (ADS)

    Sherman, M. C.; Corcelli, S. A.

    2016-07-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  14. Nonadiabatic transition path sampling.

    PubMed

    Sherman, M C; Corcelli, S A

    2016-07-21

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase. PMID:27448877

  15. Double path integral method for obtaining the mobility of the one-dimensional charge transport in molecular chain.

    PubMed

    Yoo-Kong, Sikarin; Liewrian, Watchara

    2015-12-01

    We report on a theoretical investigation concerning the polaronic effect on the transport properties of a charge carrier in a one-dimensional molecular chain. Our technique is based on the Feynman's path integral approach. Analytical expressions for the frequency-dependent mobility and effective mass of the carrier are obtained as functions of electron-phonon coupling. The result exhibits the crossover from a nearly free particle to a heavily trapped particle. We find that the mobility depends on temperature and decreases exponentially with increasing temperature at low temperature. It exhibits large polaronic-like behaviour in the case of weak electron-phonon coupling. These results agree with the phase transition (A.S. Mishchenko et al., Phys. Rev. Lett. 114, 146401 (2015)) of transport phenomena related to polaron motion in the molecular chain. PMID:26701710

  16. Analysis of sample preparation procedures for enumerating fecal coliforms in coarse southwestern U.S. bottom sediments by the most-probable-number method.

    PubMed

    Doyle, J D; Tunnicliff, B; Kramer, R E; Brickler, S K

    1984-10-01

    The determination of bacterial densities in aquatic sediments generally requires that a dilution-mixing treatment be used before enumeration of organisms by the most-probable-number fermentation tube method can be done. Differential sediment and organism settling rates may, however, influence the distribution of the microbial population after the dilution-mixing process, resulting in biased bacterial density estimates. For standardization of sample preparation procedures, the influence of settling by suspended sediments on the fecal coliform distribution in a mixing vessel was examined. This was accomplished with both inoculated (Escherichia coli) and raw, uninoculated freshwater sediments from Saguaro Lake, Ariz. Both test sediments were coarse (greater than 90% gravel and sand). Coarse sediments are typical of southwestern U.S. lakes. The distribution of fecal coliforms, as determined by the most-probable-number method, was not significantly influenced by sediment settling and remained homogenous over a 16-min postmix period. The technique developed for coarse sediments may be useful for standardizing sample preparation techniques for other sediment types. PMID:6391380

  17. Path Integral Simulations of Graphene

    NASA Astrophysics Data System (ADS)

    Yousif, Hosam

    2007-10-01

    Some properties of graphene are explored using a path integral approach. The path integral method allows us to simulate relatively large systems using Monte Carlo techniques and extract thermodynamic quantities. We simulate the effects of screening a large external charge potential, as well as conductivity and charge distributions in graphene sheets.

  18. A GPU accelerated, discrete time random walk model for simulating reactive transport in porous media using colocation probability function based reaction methods

    NASA Astrophysics Data System (ADS)

    Barnard, J. M.; Augarde, C. E.

    2012-12-01

    The simulation of reactions in flow through unsaturated porous media is a more complicated process when using particle-tracking-based models than in continuum-based models. In the former, particles are reacted on an individual particle-to-particle basis using either deterministic or probabilistic methods. This means that particle tracking methods, especially when simulations of reactions are included, are computationally intensive, as the reaction simulations require tens of thousands of nearest-neighbour searches per time step. Despite this, particle tracking methods merit further study due to their ability to eliminate numerical dispersion and to simulate anomalous transport and incomplete mixing of reactive solutes. A new model has been developed using discrete time random walk particle tracking methods to simulate reactive mass transport in porous media, which includes a variation of the colocation probability function based methods of reaction simulation presented by Benson & Meerschaert (2008). Model development has also included code acceleration via graphics processing units (GPUs). The nature of particle tracking methods means that they are well suited to parallelization using GPUs. The architecture of GPUs is single instruction - multiple data (SIMD). This means that only one operation can be performed at any one time, but it can be performed on multiple data simultaneously. This allows for significant speed gains where long loops of independent operations are performed. Computationally expensive code elements, such as the nearest-neighbour searches required by the reaction simulation, are therefore prime targets for GPU acceleration.
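
    The transport step of such a particle-tracking model is simple to state; the sketch below shows a plain (CPU, 1-D) discrete-time random-walk step with advection plus a Gaussian dispersion increment, and leaves out both the colocation-probability reaction step and the GPU port. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_walk_step(x, v, D, dt):
    """One discrete-time random-walk step: deterministic advection plus a
    Gaussian dispersion increment with standard deviation sqrt(2*D*dt)."""
    return x + v * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)

# Hypothetical 1-D example: 10,000 particles with uniform velocity and dispersion.
x = np.zeros(10_000)                  # all particles start at the origin
v, D, dt = 1.0e-5, 1.0e-9, 100.0      # m/s, m^2/s, s
for _ in range(500):
    x = random_walk_step(x, v, D, dt)
print("mean position (m):", x.mean(), "  spread, std (m):", x.std())
```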

  19. SWAPDT: A method for Short-time Withering Assessment of Probability for Drought Tolerance in Camellia sinensis validated by targeted metabolomics.

    PubMed

    Nyarukowa, Christopher; Koech, Robert; Loots, Theodor; Apostolides, Zeno

    2016-07-01

    Climate change is causing droughts that affect crop production on a global scale. Classical breeding and selection strategies for drought-tolerant cultivars will help prevent crop losses. Plant breeders, for all crops, need a simple and reliable method to identify drought-tolerant cultivars, but such a method is missing. Plant metabolism is often disrupted by abiotic stress conditions. To survive drought, plants reconfigure their metabolic pathways. Studies have documented the importance of metabolic regulation, i.e. osmolyte accumulation such as polyols and sugars (mannitol, sorbitol) and amino acids (proline), during drought. This study identified and quantified metabolites in drought-tolerant and drought-susceptible Camellia sinensis cultivars under wet and drought-stress conditions. GC-MS and LC-MS were employed for the metabolomics analyses. %RWC results show how the two drought-tolerant and two drought-susceptible cultivars differed significantly (p≤0.05) from one another; the drought-susceptible cultivars exhibited rapid water loss compared to the drought-tolerant ones. There was a significant variation (p<0.05) in metabolite content (amino acids, sugars) between drought-tolerant and drought-susceptible tea cultivars after short-time withering conditions. These metabolite changes were similar to those seen in other plant species under drought conditions, thus validating this method. The Short-time Withering Assessment of Probability for Drought Tolerance (SWAPDT) method presented here provides an easy method to identify drought-tolerant tea cultivars that will mitigate the effects of drought due to climate change on crop losses. PMID:27137993

  20. Integrating the probability integral method for subsidence prediction and differential synthetic aperture radar interferometry for monitoring mining subsidence in Fengfeng, China

    NASA Astrophysics Data System (ADS)

    Diao, Xinpeng; Wu, Kan; Zhou, Dawei; Li, Liang

    2016-01-01

    Differential synthetic aperture radar interferometry (D-InSAR) is characterized mainly by high spatial resolution and high accuracy over a wide coverage range. Because of its unique advantages, the technology is widely used for monitoring ground surface deformations. However, in coal mining areas, the ground surface can suffer large-scale collapses in short periods of time, leading to inaccuracies in D-InSAR results and limiting its use for monitoring mining subsidence. We propose a data-processing method that overcomes these disadvantages by combining D-InSAR with the probability integral method used for predicting mining subsidence. Five RadarSat-2 images over Fengfeng coal mine, China, were used to demonstrate the proposed method and assess its effectiveness. Using this method, surface deformation could be monitored over an area of thousands of square kilometers, and more than 50 regions affected by subsidence were identified. For Jiulong mine, nonlinear subsidence cumulative results were obtained for a time period from January 2011 to April 2011, and the maximum subsidence value reached up to 299 mm. Finally, the efficiency and applicability of the proposed method were verified by comparing with data from leveling surveying.

  1. Probability 1/e

    ERIC Educational Resources Information Center

    Koo, Reginald; Jones, Martin L.

    2011-01-01

    Quite a number of interesting problems in probability feature an event with probability equal to 1/e. This article discusses three such problems and attempts to explain why this probability occurs with such frequency.
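
    The article's three problems are not reproduced here, but a classic example of an event with probability 1/e is a random permutation having no fixed point (a derangement); the sketch below estimates that probability by simulation.

```python
import math
import random

def p_no_fixed_point(n=10, trials=200_000, seed=5):
    """Estimate the probability that a uniformly random permutation of n items
    has no fixed point (a derangement); for moderate n this is close to 1/e."""
    rng = random.Random(seed)
    items = list(range(n))
    hits = 0
    for _ in range(trials):
        perm = items[:]
        rng.shuffle(perm)
        if all(perm[i] != i for i in range(n)):
            hits += 1
    return hits / trials

print(p_no_fixed_point(), "vs 1/e =", 1 / math.e)
```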

  2. A low false negative filter for detecting rare bird species from short video segments using a probable observation data set-based EKF method.

    PubMed

    Song, Dezhen; Xu, Yiliang

    2010-09-01

    We report a new filter to assist the search for rare bird species. Since a rare bird appears in front of a camera only rarely (e.g., fewer than ten times per year) and only for a very short duration (e.g., a fraction of a second), our algorithm must have a very low false negative rate. We verify the bird body-axis information from the short video segment against known bird flight dynamics. Since a regular extended Kalman filter (EKF) cannot converge due to high measurement error and limited data, we develop a novel probable observation data set (PODS)-based EKF method. The new PODS-EKF searches the measurement error range for all probable observation data that ensure the convergence of the corresponding EKF in a short time frame. The algorithm has been extensively tested using both simulated inputs and real video data of four representative bird species. In the physical experiments, our algorithm was tested on rock pigeons and red-tailed hawks with 119 motion sequences. The area under the ROC curve is 95.0%. During the one-year search for ivory-billed woodpeckers, the system reduced the raw video data of 29.41 TB to only 146.7 MB (a reduction rate of 99.9995%). PMID:20388596
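
    For orientation, a single generic EKF predict/update cycle, the building block that the PODS search wraps, looks roughly like the sketch below; the models and noise covariances in the example are placeholders rather than the bird-dynamics model of the paper.

        import numpy as np

        def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
            """One extended Kalman filter predict/update cycle.
            x, P     : prior state estimate and covariance
            z        : new measurement
            f, F_jac : process model and its Jacobian
            h, H_jac : measurement model and its Jacobian
            Q, R     : process and measurement noise covariances."""
            x_pred = f(x)                              # predict
            F = F_jac(x)
            P_pred = F @ P @ F.T + Q
            H = H_jac(x_pred)                          # update
            y = z - h(x_pred)                          # innovation
            S = H @ P_pred @ H.T + R                   # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Placeholder constant-velocity model observed through its position only
        f = lambda s: np.array([s[0] + s[1], s[1]])
        F = lambda s: np.array([[1.0, 1.0], [0.0, 1.0]])
        h = lambda s: s[:1]
        H = lambda s: np.array([[1.0, 0.0]])
        x, P = ekf_step(np.array([0.0, 1.0]), np.eye(2), np.array([1.1]),
                        f, F, h, H, Q=0.01 * np.eye(2), R=np.array([[0.25]]))
        print(x)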

  3. Bragg peak prediction from quantitative proton computed tomography using different path estimates.

    PubMed

    Wang, Dongxu; Mackie, T Rockwell; Tomé, Wolfgang A

    2011-02-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probabilities of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ∼0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid treatment planning and range prediction in proton radiation therapy. PMID:21212472
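
    For intuition, a cubic spline path estimate of this kind can be built from nothing more than each proton's entrance and exit positions and directions, i.e., as a cubic Hermite curve; the sketch below is a minimal illustration, not the authors' reconstruction code.

        import numpy as np

        def cubic_spline_path(p_in, d_in, p_out, d_out, n=100):
            """Cubic Hermite estimate of a proton path inside the object.
            p_in, p_out : entry / exit positions; d_in, d_out : entry / exit directions."""
            p_in, p_out = np.asarray(p_in, float), np.asarray(p_out, float)
            d_in = np.asarray(d_in, float); d_in /= np.linalg.norm(d_in)
            d_out = np.asarray(d_out, float); d_out /= np.linalg.norm(d_out)
            L = np.linalg.norm(p_out - p_in)           # scale tangents by chord length
            t = np.linspace(0.0, 1.0, n)[:, None]
            h00 = 2 * t**3 - 3 * t**2 + 1
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            return h00 * p_in + h10 * L * d_in + h01 * p_out + h11 * L * d_out

        path = cubic_spline_path([0, 0], [1, 0.10], [20, 1.5], [1, 0.05])
        print(path[0], path[-1])   # end points reproduce the measured entry and exit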

  4. Combining the least cost path method with population genetic data and species distribution models to identify landscape connectivity during the late Quaternary in Himalayan hemlock.

    PubMed

    Yu, Haibin; Zhang, Yili; Liu, Linshan; Qi, Wei; Li, Shicheng; Hu, Zhongjun

    2015-12-01

    Himalayan hemlock (Tsuga dumosa) experienced a recolonization event during the Quaternary period; however, the specific dispersal routes remain unknown. Recently, least cost path (LCP) calculation coupled with population genetic data and species distribution models has been applied to reveal landscape connectivity. In this study, we utilized the categorical LCP method, combining species distributions for three periods (the last interglacial, the last glacial maximum, and the current period) with localities sharing chloroplast, mitochondrial, and nuclear haplotypes, to identify possible dispersal routes of T. dumosa in the late Quaternary. Then, both a coalescent estimate of migration rates among regional groups and an analysis of the genetic divergence pattern were conducted. We found that the species generally migrated along the southern slope of the Himalaya across time periods and genomic markers, and that the degree of dispersal was higher for the present period and for the mtDNA haplotypes. Furthermore, the direction of range shifts and the strong level of gene flow also imply the existence of a Himalayan dispersal path, and the low genetic divergence across the region suggests no obvious barriers along the dispersal pathway. We therefore infer that a dispersal route along the Himalaya could exist, which is an important supplement to the evolutionary history of T. dumosa. Finally, we believe that this integrative genetic and geospatial method brings new implications for the evolutionary processes and conservation priorities of species on the Tibetan Plateau. PMID:26811753
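
    A minimal version of the LCP idea (not the categorical LCP workflow of the study) treats habitat suitability as a resistance surface and runs a shortest-path search in which each move costs the mean resistance of the two cells it connects; the grid below is random toy data.

        import heapq
        import numpy as np

        def least_cost_path(resistance, start, goal):
            """Dijkstra least-cost path on a 2-D resistance grid (4-neighbour moves)."""
            rows, cols = resistance.shape
            dist, prev = {start: 0.0}, {}
            pq = [(0.0, start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    break
                if d > dist.get((r, c), np.inf):
                    continue
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols:
                        nd = d + 0.5 * (resistance[r, c] + resistance[nr, nc])
                        if nd < dist.get((nr, nc), np.inf):
                            dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                            heapq.heappush(pq, (nd, (nr, nc)))
            path, node = [goal], goal
            while node != start:                       # walk predecessors back to start
                node = prev[node]
                path.append(node)
            return path[::-1], dist[goal]

        grid = np.random.rand(50, 50) + 0.1            # toy resistance surface
        path, cost = least_cost_path(grid, (0, 0), (49, 49))
        print(len(path), round(cost, 2))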

  5. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied to the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with respect to the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models. PMID:23124484

  6. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  7. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are <1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
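
    The combining step can be pictured as below; the grouping rule and the weights (here simply the proportions of species falling into each detection-frequency group) are illustrative placeholders rather than the estimator actually derived in the paper.

        import numpy as np

        def weighted_extinction_estimate(det_freq, group_estimates, threshold):
            """Combine group-level local extinction estimates into one weighted estimate.
            det_freq        : per-species detection frequencies (used only to form groups)
            group_estimates : {'low': ..., 'high': ...} extinction probability per group
            threshold       : detection-frequency cutoff separating the two groups."""
            det_freq = np.asarray(det_freq, float)
            w_low = np.mean(det_freq < threshold)      # proportion in low-detection group
            w_high = 1.0 - w_low
            return w_low * group_estimates['low'] + w_high * group_estimates['high']

        freqs = [1, 2, 2, 7, 9, 10, 12, 3, 1, 8]       # toy detection frequencies
        print(weighted_extinction_estimate(freqs, {'low': 0.30, 'high': 0.08}, threshold=5))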

  8. Measurement Uncertainty and Probability

    NASA Astrophysics Data System (ADS)

    Willink, Robin

    2013-02-01

    Part I. Principles: 1. Introduction; 2. Foundational ideas in measurement; 3. Components of error or uncertainty; 4. Foundational ideas in probability and statistics; 5. The randomization of systematic errors; 6. Beyond the standard confidence interval; Part II. Evaluation of Uncertainty: 7. Final preparation; 8. Evaluation using the linear approximation; 9. Evaluation without the linear approximations; 10. Uncertainty information fit for purpose; Part III. Related Topics: 11. Measurement of vectors and functions; 12. Why take part in a measurement comparison?; 13. Other philosophies; 14. An assessment of objective Bayesian methods; 15. A guide to the expression of uncertainty in measurement; 16. Measurement near a limit - an insoluble problem?; References; Index.

  9. Continuous-Energy Adjoint Flux and Perturbation Calculation using the Iterated Fission Probability Method in Monte Carlo Code TRIPOLI-4® and Underlying Applications

    NASA Astrophysics Data System (ADS)

    Truchet, G.; Leconte, P.; Peneliau, Y.; Santamarina, A.; Malvagi, F.

    2014-06-01

    Pile-oscillation experiments are performed in the MINERVE reactor at CEA Cadarache to improve nuclear data accuracy. In order to precisely calculate the small reactivity variations (<10 pcm) obtained in these experiments, a reference calculation needs to be performed. This calculation may be accomplished with the continuous-energy Monte Carlo code TRIPOLI-4® using the eigenvalue difference method. This "direct" method has shown limitations in the evaluation of very small reactivity effects because it requires a very small variance on the reactivity in both states. To address this problem, it was decided to implement exact perturbation theory in TRIPOLI-4® and, consequently, to calculate a continuous-energy adjoint flux. The Iterated Fission Probability (IFP) method was chosen because it has given good results in other Monte Carlo codes. The IFP method uses a forward calculation to compute the adjoint flux; consequently, it does not rely on complex code modifications but on the physical definition of the adjoint flux as a phase-space neutron importance. In the first part of this paper, the IFP method implemented in TRIPOLI-4® is described. To illustrate the efficiency of the method, several adjoint fluxes are calculated and compared with their equivalents obtained with the deterministic code APOLLO-2. The new implementation can also calculate the angular adjoint flux. In the second part, a procedure for carrying out an exact perturbation calculation is described. A single-cell benchmark has been used to test the accuracy of the method against the "direct" estimation of the perturbation. Once again the IFP-based method shows good agreement, for a calculation time far shorter than that of the "direct" method. The main advantage of the method is that the relative accuracy of the reactivity variation does not depend on the magnitude of the variation itself, which allows very small reactivity perturbations to be calculated with high accuracy.

  10. Review of pipe-break probability assessment methods and data for applicability to the advanced neutron source project for Oak Ridge National Laboratory

    SciTech Connect

    Fullwood, R.R.

    1989-04-01

    The Advanced Neutron Source (ANS) (Difilippo, 1986; Gamble, 1986; West, 1986; Selby, 1987) will be the world's best facility for low energy neutron research. This performance requires the highest flux density of all non-pulsed reactors, with concomitant low thermal inertia and fast response to upset conditions. One of the primary concerns is that a flow cessation of the order of a second may result in fuel damage. Such a flow stoppage could be the result of a break in the primary piping. This report is a review of methods for assessing pipe break probabilities based on historical operating experience in power reactors, scaling methods, fracture mechanics and fracture growth models. The goal of this work is to develop parametric guidance for the ANS design to make the event highly unlikely. It is also to review and select methods that may be used in an interactive IBM-PC model providing fast and reasonably accurate models to aid the ANS designers in achieving the safety requirements. 80 refs., 7 figs.

  11. On Probability Domains III

    NASA Astrophysics Data System (ADS)

    Frič, Roman; Papčo, Martin

    2015-12-01

    Domains of generalized probability have been introduced in order to provide a general construction of random events, observables and states. It is based on the notion of a cogenerator and the properties of product. We continue our previous study and show how some other quantum structures fit our categorical approach. We discuss how various epireflections implicitly used in the classical probability theory are related to the transition to fuzzy probability theory and describe the latter probability theory as a genuine categorical extension of the former. We show that the IF-probability can be studied via the fuzzy probability theory. We outline a "tensor modification" of the fuzzy probability theory.

  12. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decisionmaker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its content

  13. Critical Path Web Site

    NASA Technical Reports Server (NTRS)

    Robinson, Judith L.; Charles, John B.; Rummel, John A. (Technical Monitor)

    2000-01-01

    Approximately three years ago, the Agency's lead center for the human elements of spaceflight (the Johnson Space Center), along with the National Biomedical Research Institute (NSBRI) (which has the lead role in developing countermeasures) initiated an activity to identify the most critical risks confronting extended human spaceflight. Two salient factors influenced this activity: first, what information is needed to enable a "go/no go" decision to embark on extended human spaceflight missions; and second, what knowledge and capabilities are needed to address known and potential health, safety and performance risks associated with such missions. A unique approach was used to first define and assess those risks, and then to prioritize them. This activity was called the Critical Path Roadmap (CPR) and it represents an opportunity to develop and implement a focused and evolving program of research and technology designed from a "risk reduction" perspective to prevent or minimize the risks to humans exposed to the space environment. The Critical Path Roadmap provides the foundation needed to ensure that human spaceflight, now and in the future, is as safe, productive and healthy as possible (within the constraints imposed on any particular mission) regardless of mission duration or destination. As a tool, the Critical Path Roadmap enables the decision maker to select from among the demonstrated or potential risks those that are to be mitigated, and the completeness of that mitigation. The primary audience for the CPR Web Site is the members of the scientific community who are interested in the research and technology efforts required for ensuring safe and productive human spaceflight. They may already be informed about the various space life sciences research programs or they may be newcomers. Providing the CPR content to potential investigators increases the probability of their delivering effective risk mitigations. Others who will use the CPR Web Site and its

  14. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.

  15. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability for a vehicle to be in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.

  16. Curved planar reformation and optimal path tracing (CROP) method for false positive reduction in computer-aided detection of pulmonary embolism in CTPA

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Guo, Yanhui; Wei, Jun; Chughtai, Aamer; Hadjiiski, Lubomir M.; Sundaram, Baskaran; Patel, Smita; Kuriakose, Jean W.; Kazerooni, Ella A.

    2013-03-01

    The curved planar reformation (CPR) method re-samples the vascular structures along the vessel centerline to generate longitudinal cross-section views. The CPR technique has been commonly used in coronary CTA workstation to facilitate radiologists' visual assessment of coronary diseases, but has not yet been used for pulmonary vessel analysis in CTPA due to the complicated tree structures and the vast network of pulmonary vasculature. In this study, a new curved planar reformation and optimal path tracing (CROP) method was developed to facilitate feature extraction and false positive (FP) reduction and improve our PE detection system. PE candidates are first identified in the segmented pulmonary vessels at prescreening. Based on Dijkstra's algorithm, the optimal path (OP) is traced from the pulmonary trunk bifurcation point to each PE candidate. The traced vessel is then straightened and a reformatted volume is generated using CPR. Eleven new features that characterize the intensity, gradient, and topology are extracted from the PE candidate in the CPR volume and combined with the previously developed 9 features to form a new feature space for FP classification. With IRB approval, CTPA of 59 PE cases were retrospectively collected from our patient files (UM set) and 69 PE cases from the PIOPED II data set with access permission. 595 and 800 PEs were manually marked by experienced radiologists as reference standard for the UM and PIOPED set, respectively. At a test sensitivity of 80%, the average FP rate was improved from 18.9 to 11.9 FPs/case with the new method for the PIOPED set when the UM set was used for training. The FP rate was improved from 22.6 to 14.2 FPs/case for the UM set when the PIOPED set was used for training. The improvement in the free response receiver operating characteristic (FROC) curves was statistically significant (p<0.05) by JAFROC analysis, indicating that the new features extracted from the CROP method are useful for FP reduction.

  17. A Microwave Radiometric Method to Obtain the Average Path Profile of Atmospheric Temperature and Humidity Structure Parameters and Its Application to Optical Propagation System Assessment

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.; Vyhnalek, Brian E.

    2015-01-01

    The values of the key atmospheric propagation parameters Ct2, Cq2, and Ctq are highly dependent upon the vertical height within the atmosphere, thus making it necessary to specify profiles of these values along the atmospheric propagation path. The remote sensing method suggested and described in this work makes use of a rapidly integrating microwave profiling radiometer to capture profiles of temperature and humidity through the atmosphere. The integration times of currently available profiling radiometers are such that they are approaching the temporal intervals over which one can possibly make meaningful assessments of these key atmospheric parameters. Since these parameters are fundamental to all propagation conditions, they can be used to obtain Cn2 profiles for any frequency, including those for an optical propagation path. In this case the important performance parameters of the prevailing isoplanatic angle and Greenwood frequency can be obtained. The integration times are such that Kolmogorov turbulence theory and the Taylor frozen-flow hypothesis must be transcended. Appropriate modifications to these classical approaches are derived from first principles and an expression for the structure functions is obtained. The theory is then applied to an experimental scenario and shows very good results.

  18. Reaching the Hard-to-Reach: A Probability Sampling Method for Assessing Prevalence of Driving under the Influence after Drinking in Alcohol Outlets

    PubMed Central

    De Boni, Raquel; do Nascimento Silva, Pedro Luis; Bastos, Francisco Inácio; Pechansky, Flavio; de Vasconcellos, Mauricio Teixeira Leite

    2012-01-01

    Drinking alcoholic beverages in places such as bars and clubs may be associated with harmful consequences such as violence and impaired driving. However, methods for obtaining probabilistic samples of drivers who drink at these places remain a challenge – since there is no a priori information on this mobile population – and must be continually improved. This paper describes the procedures adopted in the selection of a population-based sample of drivers who drank at alcohol selling outlets in Porto Alegre, Brazil, which we used to estimate the prevalence of intention to drive under the influence of alcohol. The sampling strategy comprises a stratified three-stage cluster sampling: 1) census enumeration areas (CEA) were stratified by alcohol outlet (AO) density and sampled with probability proportional to the number of AOs in each CEA; 2) combinations of outlets and shifts (COS) were stratified by prevalence of alcohol-related traffic crashes and sampled with probability proportional to their squared duration in hours; and, 3) drivers who drank at the selected COS were stratified by their intention to drive and sampled using inverse sampling. Sample weights were calibrated using a post-stratification estimator. 3,118 individuals were approached and 683 drivers interviewed, leading to an estimate that 56.3% (SE = 3.5%) of the drivers intended to drive after drinking within one hour of the interview. Prevalence was also estimated by sex and broad age groups. The combined use of stratification and inverse sampling enabled a good trade-off between resource and time allocation, while preserving the ability to generalize the findings. The current strategy can be viewed as a step forward in the efforts to improve surveys and estimation for hard-to-reach, mobile populations. PMID:22514620
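
    The first-stage selection, areas drawn with probability proportional to the number of outlets, can be illustrated with a simple systematic PPS draw; the sketch below mirrors only that single ingredient of the design, using made-up outlet counts.

        import numpy as np

        def systematic_pps_sample(sizes, n):
            """Systematic probability-proportional-to-size sampling.
            sizes : measure of size per unit (e.g., alcohol outlets per census area)
            n     : number of units to draw."""
            sizes = np.asarray(sizes, float)
            cum = np.cumsum(sizes)
            step = cum[-1] / n
            start = np.random.uniform(0.0, step)
            points = start + step * np.arange(n)       # one selection point per interval
            return np.searchsorted(cum, points)        # indices of the selected units

        outlet_counts = np.random.poisson(4, size=200) + 1   # toy outlet counts per area
        print(systematic_pps_sample(outlet_counts, n=20))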

  19. Opportunity's Path

    NASA Technical Reports Server (NTRS)

    2004-01-01

    fifteen sols. This will include El Capitan and probably one to two other areas.

    Blue Dot Dates:
    Sol 7 / Jan 31 = Egress & first soil data collected by instruments on the arm
    Sol 9 / Feb 2 = Second Soil Target
    Sol 12 / Feb 5 = First Rock Target
    Sol 16 / Feb 9 = Alpha Waypoint
    Sol 17 / Feb 10 = Bravo Waypoint
    Sol 19 or 20 / Feb 12 or 13 = Charlie Waypoint

  20. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach

    SciTech Connect

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes.

  1. Incorporating a completely renormalized coupled cluster approach into a composite method for thermodynamic properties and reaction paths

    SciTech Connect

    Nedd, Sean; DeYonker, Nathan; Wilson, Angela; Piecuch, Piotr; Gordon, Mark

    2012-04-12

    The correlation consistent composite approach (ccCA), using the S4 complete basis set two-point extrapolation scheme (ccCA-S4), has been modified to incorporate the left-eigenstate completely renormalized coupled cluster method, including singles, doubles, and non-iterative triples (CR-CC(2,3)) as the highest level component. The new ccCA-CC(2,3) method predicts thermodynamic properties with an accuracy that is similar to that of the original ccCA-S4 method. At the same time, the inclusion of the single-reference CR-CC(2,3) approach provides a ccCA scheme that can correctly treat reaction pathways that contain certain classes of multi-reference species such as diradicals, which would normally need to be treated by more computationally demanding multi-reference methods. The new ccCA-CC(2,3) method produces a mean absolute deviation of 1.7 kcal/mol for predicted heats of formation at 298 K, based on calibration with the G2/97 set of 148 molecules, which is comparable to that of 1.0 kcal/mol obtained using the ccCA-S4 method, while significantly improving the performance of the ccCA-S4 approach in calculations involving more demanding radical and diradical species. Both the ccCA-CC(2,3) and ccCA-S4 composite methods are used to characterize the conrotatory and disrotatory isomerization pathways of bicyclo[1.1.0]butane to trans-1,3-butadiene, for which conventional coupled cluster methods, such as the CCSD(T) approach used in the ccCA-S4 model and, in consequence, the ccCA-S4 method itself might fail by incorrectly placing the disrotatory pathway below the conrotatory one. The ccCA-CC(2,3) scheme provides correct pathway ordering while providing an accurate description of the activation and reaction energies characterizing the lowest-energy conrotatory pathway. The ccCA-CC(2,3) method is thus a viable method for the analyses of reaction mechanisms that have significant multi-reference character, and presents a generally less computationally intensive alternative to

  2. Calculation of correlated initial state in the hierarchical equations of motion method using an imaginary time path integral approach.

    PubMed

    Song, Linze; Shi, Qiang

    2015-11-21

    Based on recent findings in the hierarchical equations of motion (HEOM) for correlated initial state [Y. Tanimura, J. Chem. Phys. 141, 044114 (2014)], we propose a new stochastic method to obtain the initial conditions for the real time HEOM propagation, which can be used further to calculate the equilibrium correlation functions and symmetrized correlation functions. The new method is derived through stochastic unraveling of the imaginary time influence functional, where a set of stochastic imaginary time HEOM are obtained. The validity of the new method is demonstrated using numerical examples including the spin-Boson model, and the Holstein model with undamped harmonic oscillator modes. PMID:26590526

  3. Simulating biochemical physics with computers: 1. Enzyme catalysis by phosphotriesterase and phosphodiesterase; 2. Integration-free path-integral method for quantum-statistical calculations

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu

    We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. Another reaction is the hydrolysis of the second-messenger cyclic adenosine 3'-5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra®). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds in a two-step SN2-type mechanism. Furthermore, to characterize a chemical reaction mechanism in experiment, a direct probe is measuring kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and quantization of vibration. To systematically incorporate the quantum-statistical effects during MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs on several series of proton-transfer reactions from carboxylic acids to aryl-substituted alpha-methoxystyrenes in water. In addition, we also demonstrate that the AIF-PI method can be used to systematically compute the exact value of the zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.

  4. Speed-up hyperspheres homotopic path tracking algorithm for PWL circuits simulations.

    PubMed

    Ramirez-Pinero, A; Vazquez-Leal, H; Jimenez-Fernandez, V M; Sedighi, H M; Rashidi, M M; Filobello-Nino, U; Castaneda-Sheissa, R; Huerta-Chua, J; Sarmiento-Reyes, L A; Laguna-Camacho, J R; Castro-Gonzalez, F

    2016-01-01

    In the present work, we introduce an improved version of the hyperspheres path tracking method adapted for piecewise linear (PWL) circuits. This enhanced version takes advantage of the PWL characteristics of the homotopic curve, achieving faster path tracking and improving the performance of the homotopy continuation method (HCM). Faster computing time allows the study of circuits with higher complexity; the proposed method also significantly decreases the probability of divergence when using the Newton-Raphson method, because it is applied just twice per linear region on the homotopic path. Equilibrium equations of the studied circuits are obtained by applying modified nodal analysis; this allows us to propose an algorithm for nonlinear circuit analysis. In addition, a starting-point criterion is proposed to obtain better performance of the HCM, and a technique for avoiding the reversion phenomenon is also proposed. To prove the efficiency of the path tracking method, several case studies with bipolar (BJT) and CMOS transistors are provided. Simulation results show that the proposed approach can be up to twelve times faster than the original path tracking method and also helps to avoid several reversion cases that appear when the original hyperspheres path tracking scheme is employed. PMID:27386338
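
    For orientation, plain homotopy continuation on a smooth nonlinear system (without the hypersphere tracking or the PWL speed-ups introduced in the paper) follows the curve H(x, lam) = lam*f(x) + (1 - lam)*(x - x0) = 0 from lam = 0 to lam = 1, correcting with a few Newton steps at each increment; a minimal sketch:

        import numpy as np

        def homotopy_solve(f, jac, x0, steps=50, newton_iters=5):
            """Trace H(x, lam) = lam*f(x) + (1 - lam)*(x - x0) = 0 from lam=0 to lam=1."""
            x0 = np.asarray(x0, float)
            x = x0.copy()
            for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
                for _ in range(newton_iters):          # Newton correction at this lam
                    H = lam * f(x) + (1 - lam) * (x - x0)
                    J = lam * jac(x) + (1 - lam) * np.eye(x.size)
                    x = x - np.linalg.solve(J, H)
            return x

        # Toy nonlinear equations: u**2 + v - 3 = 0 and u + v**2 - 5 = 0
        f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
        jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
        print(homotopy_solve(f, jac, np.array([0.5, 0.5])))   # converges to (1, 2)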

  5. Forecasting the path of a laterally propagating dike

    NASA Astrophysics Data System (ADS)

    Heimisson, Elías Rafn; Hooper, Andrew; Sigmundsson, Freysteinn

    2015-12-01

    An important aspect of eruption forecasting is predicting the path of propagating dikes. We show how lateral dike propagation can be forecast using the minimum potential energy principle. We compare theory to observed propagation paths of dikes originating at the Bárðarbunga volcano, Iceland, in 2014 and 1996, by developing a probability distribution for the most likely propagation path. The observed propagation paths agree well with the model prediction. We find that topography is very important for the model, and our preferred forecasting model considers its influence on the potential energy change of the crust and magma. We tested the influence of topography by running the model assuming no topography and found that the path of the 2014 dike could not be hindcasted. The results suggest that lateral dike propagation is governed not only by deviatoric stresses but also by pressure gradients and gravitational potential energy. Furthermore, the model predicts the formation of curved dikes around cone-shaped structures without the assumption of a local deviatoric stress field. We suggest that a likely eruption site for a laterally propagating dike is in topographic lows. The method presented here is simple and computationally feasible. Our results indicate that this kind of a model can be applied to mitigate volcanic hazards in regions where the tectonic setting promotes formation of laterally propagating vertical intrusive sheets.

  6. Probability and Relative Frequency

    NASA Astrophysics Data System (ADS)

    Drieschner, Michael

    2016-01-01

    The concept of probability seems to have been inexplicable since its invention in the seventeenth century. In its use in science, probability is closely related with relative frequency. So the task seems to be interpreting that relation. In this paper, we start with predicted relative frequency and show that its structure is the same as that of probability. I propose to call that the `prediction interpretation' of probability. The consequences of that definition are discussed. The "ladder"-structure of the probability calculus is analyzed. The expectation of the relative frequency is shown to be equal to the predicted relative frequency. Probability is shown to be the most general empirically testable prediction.

  7. A transported probability density function/photon Monte Carlo method for high-temperature oxy-natural gas combustion with spectral gas and wall radiation

    NASA Astrophysics Data System (ADS)

    Zhao, X. Y.; Haworth, D. C.; Ren, T.; Modest, M. F.

    2013-04-01

    A computational fluid dynamics model for high-temperature oxy-natural gas combustion is developed and exercised. The model features detailed gas-phase chemistry and radiation treatments (a photon Monte Carlo method with line-by-line spectral resolution for gas and wall radiation - PMC/LBL) and a transported probability density function (PDF) method to account for turbulent fluctuations in composition and temperature. The model is first validated for a 0.8 MW oxy-natural gas furnace, and the level of agreement between model and experiment is found to be at least as good as any that has been published earlier. Next, simulations are performed with systematic model variations to provide insight into the roles of individual physical processes and their interplay in high-temperature oxy-fuel combustion. This includes variations in the chemical mechanism and the radiation model, and comparisons of results obtained with versus without the PDF method to isolate and quantify the effects of turbulence-chemistry interactions and turbulence-radiation interactions. In this combustion environment, it is found to be important to account for the interconversion of CO and CO2, and radiation plays a dominant role. The PMC/LBL model allows the effects of molecular gas radiation and wall radiation to be clearly separated and quantified. Radiation and chemistry are tightly coupled through the temperature, and correct temperature prediction is required for correct prediction of the CO/CO2 ratio. Turbulence-chemistry interactions influence the computed flame structure and mean CO levels. Strong local effects of turbulence-radiation interactions are found in the flame, but the net influence of TRI on computed mean temperature and species profiles is small. The ultimate goal of this research is to simulate high-temperature oxy-coal combustion, where accurate treatments of chemistry, radiation and turbulence-chemistry-particle-radiation interactions will be even more important.

  8. Evolution and Probability.

    ERIC Educational Resources Information Center

    Bailey, David H.

    2000-01-01

    Some of the most impressive-sounding criticisms of the conventional theory of biological evolution involve probability. Presents a few examples of how probability should and should not be used in discussing evolution. (ASK)

  9. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  10. Probability on a Budget.

    ERIC Educational Resources Information Center

    Ewbank, William A.; Ginther, John L.

    2002-01-01

    Describes how to use common dice numbered 1-6 for simple mathematical situations including probability. Presents a lesson using regular dice and specially marked dice to explore some of the concepts of probability. (KHR)

  11. Carbon dioxide and water budget of grazed grassland in Grünschwaige (Munich, Bavaria) measured by EC-method with an open path gas analyzer

    NASA Astrophysics Data System (ADS)

    Vetter, S.; Bernhofer, Ch.; Auerswald, K.

    2009-04-01

    Terrestrial ecosystems like grasslands can act as a sink or source for greenhouse gases (GHGs) such as carbon dioxide. This is important for scientific and political stakeholders, as GHGs drive climate change. The eddy covariance method has become a major tool for quantifying such fluxes. It depends, however, on a number of corrections applied to the measured data. The influence of air density is often treated with the WPL-correction (Webb-Pearman-Leuning), which, in contrast to the recently published method by Burba et al. (2008), does not take the heating of the instrument surface into account. The aim of the study is to compare the influence of the two density corrections on the CO2 fluxes. The fluxes of water and carbon dioxide were measured with the eddy covariance method from 2002 to 2008 at a grazed grassland site in Grünschwaige close to Munich (Bavaria) in southern Germany. The climate in this area is temperate, with an annual precipitation of 800 mm and an annual mean temperature of 9 °C. For the eddy covariance measurements an open path CO2/H2O analyzer was used. Wind speed (3D) and temperature were measured by a sonic anemometer. The sensible/latent heat fluxes and the carbon dioxide flux were calculated and corrected using EdiRe. The application of the two density correction methods resulted in important differences in the carbon dioxide flux. The fluxes corrected according to Burba et al. (2008) indicated small CO2 sinks (i.e., negative net carbon exchange) or even sources, while the WPL-correction showed (larger) CO2 sinks. Additionally, with both correction methods the results showed a high sensitivity to weather conditions, but the effects were stronger using the correction following Burba et al. (2008). The most important drivers of flux variability were precipitation and temperature. The seasonal pattern of precipitation was especially important during the vegetation period. Drought and heat periods, which lasted at least one month like

  12. Path Separability of Graphs

    NASA Astrophysics Data System (ADS)

    Diot, Emilie; Gavoille, Cyril

    In this paper we investigate the structural properties of k-path separable graphs, that is, graphs that can be separated by a set of k shortest paths. We identify several graph families having such path separability, and we show that this property is closed under minor taking. In particular we establish a list of forbidden minors for 1-path separable graphs.

  13. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur only with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
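
    The chance-constraint idea can be made concrete for a single linear constraint under Gaussian position uncertainty: requiring P(a'x <= b) >= 1 - eps is equivalent to the deterministic, tightened constraint a'mu + z_(1-eps) * sqrt(a' Sigma a) <= b on the mean mu, where z_(1-eps) is the standard normal quantile. The sketch below checks only that conversion; it is not the paper's planner.

        import numpy as np
        from scipy.stats import norm

        def chance_constraint_satisfied(a, b, mu, Sigma, eps):
            """Check P(a @ x <= b) >= 1 - eps for x ~ N(mu, Sigma)
            via the equivalent deterministic tightened constraint."""
            a, mu = np.asarray(a, float), np.asarray(mu, float)
            margin = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
            return a @ mu + margin <= b

        # Obstacle half-plane x1 <= 4 with position standard deviation 0.5 in each axis
        a, b, Sigma = np.array([1.0, 0.0]), 4.0, 0.25 * np.eye(2)
        print(chance_constraint_satisfied(a, b, mu=[2.5, 1.0], Sigma=Sigma, eps=0.05))  # True
        print(chance_constraint_satisfied(a, b, mu=[3.8, 1.0], Sigma=Sigma, eps=0.05))  # False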

  14. Local OH concentration measurement in atmospheric pressure flames by a laser-saturated fluorescence method: two-optical path laser-induced fluorescence.

    PubMed

    Desgroux, P; Cottereau, M J

    1991-01-01

    The first (to our knowledge) measurements of the number density of OH in flames at atmospheric pressure by TOPLIF are reported. TOPLIF (acronym for two optical paths laser-induced fluorescence) improves the accuracy of LIF measurements by taking into account both the spatial profile of the exciting laser intensity and the collisional transfer rate. The method is based on simultaneously recording the LIF signals from focal volumes of two different shapes. The ratio of the signals is a measure of the saturation parameter (which depends on the laser intensity and the quenching), from which the species number density can be accurately determined from the fluorescence signals. The method is valid as long as at least partial saturation is reached. First, experimental verification of the theoretical basis of the method is reported. The population of a single rovibronic level is measured, as in most spectroscopic methods. TOPLIF measures this population relative to this level's population in a chosen reference flame; an absolute value can therefore be obtained if the value in the reference flame is known or measured. Absolute [OH] profiles obtained in flat flames burning at 60 and 1000 mb are presented and compared to laser absorption measurements. PMID:20581952

  15. Dependent Probability Spaces

    ERIC Educational Resources Information Center

    Edwards, William F.; Shiflett, Ray C.; Shultz, Harris

    2008-01-01

    The mathematical model used to describe independence between two events in probability has a non-intuitive consequence called dependent spaces. The paper begins with a very brief history of the development of probability, then defines dependent spaces, and reviews what is known about finite spaces with uniform probability. The study of finite…

  16. Searching with probabilities

    SciTech Connect

    Palay, A.J.

    1985-01-01

    This book examines how probability distributions can be used as a knowledge representation technique. It presents a mechanism that can be used to guide a selective search algorithm to solve a variety of tactical chess problems. Topics covered include probabilities and searching, the B algorithm and chess probabilities in practice, examples, results, and future work.

  17. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
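
    For reference, the LKB calculation itself reduces to a generalized equivalent uniform dose followed by a probit; the sketch below uses the parameter symbols from the abstract, with placeholder DVH and parameter values rather than the fitted ones.

        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(dose_bins, vol_fractions, n, m, td50):
            """Lyman-Kutcher-Burman NTCP from a differential DVH.
            dose_bins    : dose per DVH bin (Gy)
            vol_fractions: fractional organ volume in each bin (sums to 1)
            n, m, td50   : LKB volume, slope, and 50%-complication-dose parameters."""
            d = np.asarray(dose_bins, float)
            v = np.asarray(vol_fractions, float)
            geud = np.sum(v * d ** (1.0 / n)) ** n     # generalized equivalent uniform dose
            t = (geud - td50) / (m * td50)
            return norm.cdf(t)

        # Placeholder DVH: 10% of the cord at 20 Gy, 90% at 2 Gy; illustrative parameters
        print(lkb_ntcp([20.0, 2.0], [0.1, 0.9], n=0.05, m=0.175, td50=66.5))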

  18. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.

  19. An Inviscid Decoupled Method for the Roe FDS Scheme in the Reacting Gas Path of FUN3D

    NASA Technical Reports Server (NTRS)

    Thompson, Kyle B.; Gnoffo, Peter A.

    2016-01-01

    An approach is described to decouple the species continuity equations from the mixture continuity, momentum, and total energy equations for the Roe flux difference splitting scheme. This decoupling simplifies the implicit system, so that the flow solver can be made significantly more efficient, with very little penalty on overall scheme robustness. Most importantly, the computational cost of the point implicit relaxation is shown to scale linearly with the number of species for the decoupled system, whereas the fully coupled approach scales quadratically. Also, the decoupled method significantly reduces the cost in wall time and memory in comparison to the fully coupled approach. This work lays the foundation for development of an efficient adjoint solution procedure for high speed reacting flow.

  20. In All Probability, Probability is not All

    ERIC Educational Resources Information Center

    Helman, Danny

    2004-01-01

    The national lottery is often portrayed as a game of pure chance with no room for strategy. This misperception seems to stem from the application of probability instead of expectancy considerations, and can be utilized to introduce the statistical concept of expectation.

  1. Identifying the main paths of information diffusion in online social networks

    NASA Astrophysics Data System (ADS)

    Zhu, Hengmin; Yin, Xicheng; Ma, Jing; Hu, Wei

    2016-06-01

    Recently, an increasing body of research on relationship strength has shown that there are socially active links in online social networks. Furthermore, it is likely that there exist main paths which play the most significant role in the process of information diffusion. Although much previous work has focused on the pathway of a specific event, few studies have extracted the main paths themselves. To identify the main paths of online social networks, we propose a method which measures the weights of links based on historical interaction records. The influence of each node is quantified based on its forwarding amount, and top-ranked nodes are selected as the influential users. Path importance is evaluated by calculating the probability that a message would spread via that path. We applied our method to a real-world network and found interesting insights: each influential user can reach another via a short main path, and the distribution of main paths shows a significant community effect.
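
    If each link carries an independent transmission probability, the path with the highest spread probability is simply the shortest path after a -log transform of the link weights. The sketch below illustrates that reduction on a toy graph; it is not the authors' exact weighting scheme.

        import math
        import networkx as nx

        def most_probable_path(edges, source, target):
            """edges: iterable of (u, v, p), where p is the probability that a message
            passes over link (u, v). Returns the maximum-probability path and its probability."""
            g = nx.DiGraph()
            for u, v, p in edges:
                g.add_edge(u, v, weight=-math.log(p))  # product of p  <=>  sum of -log p
            path = nx.shortest_path(g, source, target, weight='weight')
            prob = math.exp(-nx.shortest_path_length(g, source, target, weight='weight'))
            return path, prob

        edges = [('a', 'b', 0.9), ('b', 'c', 0.5), ('a', 'c', 0.3),
                 ('b', 'd', 0.8), ('c', 'd', 0.9)]
        print(most_probable_path(edges, 'a', 'd'))     # ('a', 'b', 'd') with p ~ 0.72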

  2. Path Integrals and Supersolids

    NASA Astrophysics Data System (ADS)

    Ceperley, D. M.

    2008-11-01

    Recent experiments by Kim and Chan on solid 4He have been interpreted as discovery of a supersolid phase of matter. Arguments based on wavefunctions have shown that such a phase exists, but do not necessarily apply to solid 4He. Imaginary time path integrals, implemented using Monte Carlo methods, provide a definitive answer; a clean system of solid 4He should be a normal quantum solid, not one with superfluid properties. The Kim-Chan phenomena must be due to defects introduced when the solid is formed.

  3. Shortest path and Schramm-Loewner Evolution

    PubMed Central

    Posé, N.; Schrenk, K. J.; Araújo, N. A. M.; Herrmann, H. J.

    2014-01-01

    We numerically show that the statistical properties of the shortest path on critical percolation clusters are consistent with the ones predicted for Schramm-Loewner evolution (SLE) curves for κ = 1.04 ± 0.02. The shortest path results from a global optimization process. To identify it, one needs to explore an entire area. Establishing a relation with SLE permits the generation of curves statistically equivalent to the shortest path from a Brownian motion. We numerically analyze the winding angle, the left passage probability, and the driving function of the shortest path and compare them to the distributions predicted for SLE curves with the same fractal dimension. The consistency with SLE opens the possibility of using a solid theoretical framework to describe the shortest path, and it raises relevant questions regarding conformal invariance and domain Markov properties, which we also discuss. PMID:24975019

  4. Assessment of Rainfall Estimates Using a Standard Z-R Relationship and the Probability Matching Method Applied to Composite Radar Data in Central Florida

    NASA Technical Reports Server (NTRS)

    Crosson, William L.; Duchon, Claude E.; Raghavan, Ravikumar; Goodman, Steven J.

    1996-01-01

    Precipitation estimates from radar systems are a crucial component of many hydrometeorological applications, from flash flood forecasting to regional water budget studies. For analyses on large spatial scales and long timescales, it is frequently necessary to use composite reflectivities from a network of radar systems. Such composite products are useful for regional or national studies, but introduce a set of difficulties not encountered when using single radars. For instance, each contributing radar has its own calibration and scanning characteristics, but radar identification may not be retained in the compositing procedure. As a result, range effects on signal return cannot be taken into account. This paper assesses the accuracy with which composite radar imagery can be used to estimate precipitation in the convective environment of Florida during the summer of 1991. Results using Z = 300R^1.4 (the WSR-88D default Z-R relationship) are compared with those obtained using the probability matching method (PMM). Rainfall derived from the power law Z-R was found to be highly biased (+90% to +110%) compared to rain gauge measurements for various temporal and spatial integrations. Application of a 36.5-dBZ reflectivity threshold (determined via the PMM) was found to improve the performance of the power law Z-R, reducing the biases substantially to 20%-33%. Correlations between precipitation estimates obtained with either Z-R relationship and mean gauge values are much higher for areal averages than for point locations. Precipitation estimates from the PMM are an improvement over those obtained using the power law in that biases and root-mean-square errors are much lower. The minimum timescale for application of the PMM with the composite radar dataset was found to be several days for area-average precipitation. The minimum spatial scale is harder to quantify, although it is concluded that it is less than 350 sq km. Implications relevant to the WSR-88D system are
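
    The PMM itself is essentially a quantile-matching step: reflectivity and rain-rate values that occur with the same cumulative probability in their respective distributions are paired to define an empirical Z-R curve. The sketch below uses synthetic Z and R samples in place of the radar and gauge data.

        import numpy as np

        def probability_matched_zr(z_samples, r_samples, n_quantiles=100):
            """Probability matching: pair Z and R values with equal cumulative probability.
            Returns matched (Z, R) values defining an empirical Z-R relationship."""
            q = np.linspace(0.01, 0.99, n_quantiles)
            return np.quantile(z_samples, q), np.quantile(r_samples, q)

        z = np.random.normal(35.0, 8.0, 5000)          # synthetic reflectivities (dBZ)
        r = np.random.lognormal(1.0, 0.8, 5000)        # synthetic rain rates (mm/h)
        z_match, r_match = probability_matched_zr(z, r)
        print(list(zip(z_match[::25].round(1), r_match[::25].round(2))))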

  5. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm that finds a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and the resulting paths are shown, demonstrating the effectiveness of the planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.

  6. A Posteriori Transit Probabilities

    NASA Astrophysics Data System (ADS)

    Stevens, Daniel J.; Gaudi, B. Scott

    2013-08-01

    Given the radial velocity (RV) detection of an unseen companion, it is often of interest to estimate the probability that the companion also transits the primary star. Typically, one assumes a uniform distribution for the cosine of the inclination angle i of the companion's orbit. This yields the familiar estimate for the prior transit probability of ~R*/a, given the primary radius R* and orbital semimajor axis a, and assuming small companions and a circular orbit. However, the posterior transit probability depends not only on the prior probability distribution of i but also on the prior probability distribution of the companion mass Mc, given a measurement of the product of the two (the minimum mass Mc sin i) from an RV signal. In general, the posterior can be larger or smaller than the prior transit probability. We derive analytic expressions for the posterior transit probability assuming a power-law form for the distribution of true masses, dΓ/dMc ∝ Mc^α, for integer values -3 <= α <= 3. We show that for low transit probabilities, these probabilities reduce to a constant multiplicative factor fα of the corresponding prior transit probability, where fα in general depends on α and an assumed upper limit on the true mass. The prior and posterior probabilities are equal for α = -1. The posterior transit probability is ~1.5 times larger than the prior for α = -3 and is ~4/π times larger for α = -2, but is less than the prior for α >= 0, and can be arbitrarily small for α > 1. We also calculate the posterior transit probability in different mass regimes for two physically motivated mass distributions of companions around Sun-like stars. We find that for Jupiter-mass planets, the posterior transit probability is roughly equal to the prior probability, whereas the posterior is likely higher for Super-Earths and Neptunes (10 M⊕ - 30 M⊕) and Super-Jupiters (3 MJup - 10 MJup), owing to the predicted steep rise in the mass function toward smaller
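
    For orientation, the geometric prior transit probability ~R*/a quoted above can be computed as in the sketch below; the solar-radius-in-AU constant is a standard approximate value, and the posterior scaling factor f_alpha is left as a user-supplied number, since its general form depends on α and the assumed mass cutoff derived in the paper.

```python
R_SUN_AU = 0.00465  # approximate solar radius in astronomical units

def prior_transit_probability(r_star_rsun, a_au):
    """Geometric prior transit probability ~ R*/a for a small companion on a
    circular orbit, assuming a uniform prior on cos(i)."""
    return (r_star_rsun * R_SUN_AU) / a_au

def posterior_transit_probability(prior, f_alpha):
    """Scale the prior by a constant factor f_alpha, as in the low-probability
    limit discussed in the abstract (f_alpha depends on alpha and the mass cutoff)."""
    return f_alpha * prior

p = prior_transit_probability(r_star_rsun=1.0, a_au=0.05)   # hot-Jupiter-like orbit
print(p, posterior_transit_probability(p, f_alpha=4.0 / 3.141592653589793))
```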

  7. The absolute path command

    Energy Science and Technology Software Center (ESTSC)

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.
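
    The behavior described above can be approximated with the standard library, as in the illustrative sketch below; this is a stand-in for the ap command, not its actual implementation.

```python
from pathlib import Path
import stat

def absolute_path_report(name):
    """Resolve all symlinks in `name` and list each component of the final
    absolute path with its permissions and ownership (illustrative only)."""
    final = Path(name).resolve()                   # traverse every symlink
    print("final path:", final)
    for component in list(final.parents)[::-1] + [final]:
        st = component.lstat()
        print(stat.filemode(st.st_mode), st.st_uid, st.st_gid, component)

absolute_path_report(".")
```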

  8. The absolute path command

    SciTech Connect

    Moody, A.

    2012-05-11

    The ap command traverses all symlinks in a given file, directory, or executable name to identify the final absolute path. It can print just the final path, each intermediate link along the symlink chain, and the permissions and ownership of each directory component in the final path. It has functionality similar to "which", except that it shows the final path instead of the first path. It is also similar to "pwd", but it can provide the absolute path to a relative directory from the current working directory.

  9. Introducing a Method for Calculating the Allocation of Attention in a Cognitive “Two-Armed Bandit” Procedure: Probability Matching Gives Way to Maximizing

    PubMed Central

    Heyman, Gene M.; Grisanzio, Katherine A.; Liang, Victor

    2016-01-01

    We tested whether principles that describe the allocation of overt behavior, as in choice experiments, also describe the allocation of cognition, as in attention experiments. Our procedure is a cognitive version of the “two-armed bandit choice procedure.” The two-armed bandit procedure has been of interest to psychologists and economists because it tends to support patterns of responding that are suboptimal. Each of two alternatives provides rewards according to fixed probabilities. The optimal solution is to choose the alternative with the higher probability of reward on each trial. However, subjects often allocate responses so that the probability of a response approximates its probability of reward. Although it is this result which has attracted most interest, probability matching is not always observed. As a function of monetary incentives, practice, and individual differences, subjects tend to deviate from probability matching toward exclusive preference, as predicted by maximizing. In our version of the two-armed bandit procedure, the monitor briefly displayed two small, adjacent stimuli that predicted correct responses according to fixed probabilities, as in a two-armed bandit procedure. We show that in this setting, a simple linear equation describes the relationship between attention and correct responses, and that the equation’s solution is the allocation of attention between the two stimuli. The calculations showed that attention allocation varied as a function of the degree to which the stimuli predicted correct responses. Linear regression revealed a strong correlation (r = 0.99) between the predictiveness of a stimulus and the probability of attending to it. Nevertheless, there were deviations from probability matching, and although small, they were systematic and statistically significant. As in choice studies, attention allocation deviated toward maximizing as a function of practice, feedback, and incentives. Our approach also predicts the
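
    A toy simulation (not the authors' procedure) makes the matching-versus-maximizing contrast concrete; the reward probabilities 0.7 and 0.3 below are arbitrary illustrative values.

```python
import random

def simulate(allocation_rule, p=(0.7, 0.3), trials=10000, seed=0):
    """Return the fraction of rewarded trials under a given allocation rule,
    where allocation_rule(p) gives the probability of choosing arm 0."""
    rng = random.Random(seed)
    prob_arm0 = allocation_rule(p)
    rewards = 0
    for _ in range(trials):
        arm = 0 if rng.random() < prob_arm0 else 1
        rewards += rng.random() < p[arm]
    return rewards / trials

matching = lambda p: p[0] / (p[0] + p[1])             # probability matching
maximizing = lambda p: 1.0 if p[0] >= p[1] else 0.0   # exclusive preference

print("matching:   ", simulate(matching))    # expected ~0.58
print("maximizing: ", simulate(maximizing))  # expected ~0.70
```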

  10. Knowledge typology for imprecise probabilities.

    SciTech Connect

    Wilson, G. D.; Zucker, L. J.

    2002-01-01

    When characterizing the reliability of a complex system there are often gaps in the data available for specific subsystems or other factors influencing total system reliability. At Los Alamos National Laboratory we employ ethnographic methods to elicit expert knowledge when traditional data is scarce. Typically, we elicit expert knowledge in probabilistic terms. This paper will explore how we might approach elicitation if methods other than probability (i.e., Dempster-Shafer, or fuzzy sets) prove more useful for quantifying certain types of expert knowledge. Specifically, we will consider if experts have different types of knowledge that may be better characterized in ways other than standard probability theory.

  11. Tackling higher derivative ghosts with the Euclidean path integral

    SciTech Connect

    Fontanini, Michele; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  12. Path integral density matrix dynamics: A method for calculating time-dependent properties in thermal adiabatic and non-adiabatic systems

    SciTech Connect

    Habershon, Scott

    2013-09-14

    We introduce a new approach for calculating quantum time-correlation functions and time-dependent expectation values in many-body thermal systems; both electronically adiabatic and non-adiabatic cases can be treated. Our approach uses a path integral simulation to sample an initial thermal density matrix; subsequent evolution of this density matrix is equivalent to solution of the time-dependent Schrödinger equation, which we perform using a linear expansion of Gaussian wavepacket basis functions that evolve according to simple classical-like trajectories. Overall, this methodology represents a formally exact approach for calculating time-dependent quantum properties; by introducing approximations into both the imaginary-time and real-time propagations, this approach can be adapted for complex many-particle systems interacting through arbitrary potentials. We demonstrate this method for the spin-boson model, where we find good agreement with numerically exact calculations. We also discuss future directions of improvement for our approach with a view to improving accuracy and efficiency.

  13. Up-scaling methods of greenhouse gas fluxes between the soil and the atmosphere using a measuring tunnel as well as open-path measurement techniques for the flux-gradient method

    NASA Astrophysics Data System (ADS)

    Schäfer, K.; Jahn, C.; Emeis, S.; Wiwiorra, M.; von der Heide, C.; Böttcher, J.; Deurer, M.; Weymann, D.; Schleichardt, A.; Raabe, A.

    2009-09-01

    For up-scaling the emissions of N2O, CO2 and CH4 (GHG) from arable field soils, a measuring tunnel for controlled enrichment of released gases was installed at the soil surface, covering an area of 495 or 306 m2. The concentrations of GHG and humidity were measured by path-averaging, multi-component Fourier Transform Infrared (FTIR) absorption spectrometry along an open path of 100 m length across the whole measuring tunnel. Over a two-year time frame, the N2O fluxes between the soil and the atmosphere at the agricultural field varied between 1.0 and 21 µg N2O-N m-2 h-1. These results were compared with N2O emission rates that were simultaneously measured with a conventional closed chamber technique. The resulting N2O fluxes between the soil and the atmosphere from both methods had the same order of magnitude. However, we found an extreme spatial variability of N2O fluxes at the scale of the closed chambers. The hypothesis that an enlargement of the measured soil surface area is an appropriate measure to avoid the problems of up-scaling results of small-scale chamber measurements was confirmed by the results obtained with the measuring tunnel. Currently, a non-intrusive emission and flux measurement method at a scale from 100 m up to 27,000 m2, on the basis of the flux-gradient method (0.50 and 2.70 m height above surface), is being developed and tested by means of open-path multi-component measurement methods (FTIR, GHG) and area-averaging meteorological measurements (determination of horizontal winds and friction velocity using acoustic tomography). Two campaigns in October 2007 and June 2008 were performed with this new methodology when wind speeds were low. Due to the very low wind speeds and insufficient turbulence for the application of the usual flux-gradient method, a new concept introducing the viscosity instead of stability corrections was developed. It requires a direct measurement of the friction velocity and the vertical gradient of the horizontal wind speeds by

  14. Path Analysis: A Link between Family Theory and Research.

    ERIC Educational Resources Information Center

    Rank, Mark R.; Sabatelli, Ronald M.

    This paper discusses path analysis and the applicability of this methodology to the field of family studies. The statistical assumptions made in path analysis are presented along with a description of the two types of models within path analysis, i.e., recursive and non-recursive. Methods of calculating in the path model and the advantages of…

  15. Single-case probabilities

    NASA Astrophysics Data System (ADS)

    Miller, David

    1991-12-01

    The propensity interpretation of probability, bred by Popper in 1957 (K. R. Popper, in Observation and Interpretation in the Philosophy of Physics, S. Körner, ed. (Butterworth, London, 1957, and Dover, New York, 1962), p. 65; reprinted in Popper Selections, D. W. Miller, ed. (Princeton University Press, Princeton, 1985), p. 199) from pure frequency stock, is the only extant objectivist account that provides any proper understanding of single-case probabilities as well as of probabilities in ensembles and in the long run. In Sec. 1 of this paper I recall salient points of the frequency interpretations of von Mises and of Popper himself, and in Sec. 2 I filter out from Popper's numerous expositions of the propensity interpretation its most interesting and fertile strain. I then go on to assess it. First I defend it, in Sec. 3, against recent criticisms (P. Humphreys, Philos. Rev. 94, 557 (1985); P. Milne, Erkenntnis 25, 129 (1986)) to the effect that conditional [or relative] probabilities, unlike absolute probabilities, can only rarely be made sense of as propensities. I then challenge its predominance, in Sec. 4, by outlining a rival theory: an irreproachably objectivist theory of probability, fully applicable to the single case, that interprets physical probabilities as instantaneous frequencies.

  16. Probability with Roulette

    ERIC Educational Resources Information Center

    Marshall, Jennings B.

    2007-01-01

    This article describes how roulette can be used to teach basic concepts of probability. Various bets are used to illustrate the computation of expected value. A betting system shows variations in patterns that often appear in random events.

  17. Launch Collision Probability

    NASA Technical Reports Server (NTRS)

    Bollenbacher, Gary; Guptill, James D.

    1999-01-01

    This report analyzes the probability of a launch vehicle colliding with one of the nearly 10,000 tracked objects orbiting the Earth, given that an object on a near-collision course with the launch vehicle has been identified. Knowledge of the probability of collision throughout the launch window can be used to avoid launching at times when the probability of collision is unacceptably high. The analysis in this report assumes that the positions of the orbiting objects and the launch vehicle can be predicted as a function of time and therefore that any tracked object which comes close to the launch vehicle can be identified. The analysis further assumes that the position uncertainty of the launch vehicle and the approaching space object can be described with position covariance matrices. With these and some additional simplifying assumptions, a closed-form solution is developed using two approaches. The solution shows that the probability of collision is a function of position uncertainties, the size of the two potentially colliding objects, and the nominal separation distance at the point of closest approach. The impact of the simplifying assumptions on the accuracy of the final result is assessed, and the application of the results to the Cassini mission, launched in October 1997, is described. Other factors that affect the probability of collision are also discussed. Finally, the report offers alternative approaches that can be used to evaluate the probability of collision.
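
    The quantity being analyzed can be illustrated with a brute-force Monte Carlo estimate in the encounter plane (the report itself derives a closed-form solution); the covariance, nominal miss distance, and hard-body radius below are made-up numbers.

```python
import numpy as np

def collision_probability(miss_vector, covariance, combined_radius, n=200_000, seed=1):
    """Monte Carlo estimate of collision probability in the encounter plane.

    miss_vector     : nominal 2D separation at closest approach
    covariance      : combined 2x2 position covariance of the two objects
    combined_radius : sum of the radii of the two objects (hard-body radius)
    """
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean=miss_vector, cov=covariance, size=n)
    distances = np.linalg.norm(samples, axis=1)
    return float(np.mean(distances < combined_radius))

# Illustrative numbers in metres: 500 m nominal miss, anisotropic uncertainty
p = collision_probability(miss_vector=[500.0, 0.0],
                          covariance=[[300.0**2, 0.0], [0.0, 100.0**2]],
                          combined_radius=20.0)
print(f"estimated collision probability: {p:.2e}")
```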

  18. Comparison of an automated Most Probable Number (MPN) technique to traditional plating methods for estimating populations of total aerobes, coliforms and E. coli associated with freshly processed broiler chickens

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recently, an instrument (TEMPO™) has been developed to automate the Most Probable Number (MPN) technique and reduce the effort required to estimate some bacterial populations. We compared the automated MPN technique to traditional microbiological plating methods or Petrifilm™ for estimating the t...

  19. Identifying decohering paths in closed quantum systems

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1990-01-01

    A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. There are interesting differences with each of these which are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. Also briefly discussed is how one might quantify some other aspects of classicality. The important role that concrete calculations play in testing this and other proposals is stressed.

  20. A novel method for patient exit and entrance dose prediction based on water equivalent path length measured with an amorphous silicon electronic portal imaging device.

    PubMed

    Kavuma, Awusi; Glegg, Martin; Metwaly, Mohamed; Currie, Garry; Elliott, Alex

    2010-01-21

    In vivo dosimetry is one of the quality assurance tools used in radiotherapy to monitor the dose delivered to the patient. Electronic portal imaging device (EPID) images for a set of solid water phantoms of varying thicknesses were acquired and the data fitted onto a quadratic equation, which relates the reduction in photon beam intensity to the attenuation coefficient and material thickness at a reference condition. The quadratic model is used to convert the measured grey scale value into water equivalent path length (EPL) at each pixel for any material imaged by the detector. For any other non-reference conditions, scatter, field size and MU variation effects on the image were corrected by relative measurements using an ionization chamber and an EPID. The 2D EPL is linked to the percentage exit dose table, for different thicknesses and field sizes, thereby converting the plane pixel values at each point into a 2D dose map. The off-axis ratio is corrected using envelope and boundary profiles generated from the treatment planning system (TPS). The method requires field size, monitor unit and source-to-surface distance (SSD) as clinical input parameters to predict the exit dose, which is then used to determine the entrance dose. The measured pixel dose maps were compared with calculated doses from TPS for both entrance and exit depth of phantom. The gamma index at 3% dose difference (DD) and 3 mm distance to agreement (DTA) resulted in an average of 97% passing for the square fields of 5, 10, 15 and 20 cm. The exit dose EPID dose distributions predicted by the algorithm were in better agreement with TPS-calculated doses than phantom entrance dose distributions. PMID:20019398
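
    A minimal sketch of the central calibration step, under the assumption that the quadratic relates the log of the measured transmission to material thickness; the calibration numbers below are invented, and the authors' exact functional form and corrections (scatter, field size, off-axis) are given in the paper.

```python
import numpy as np

# Calibration: solid-water thicknesses (cm) and measured transmission I/I0 (illustrative)
thickness = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
transmission = np.array([1.00, 0.74, 0.55, 0.41, 0.31, 0.24])

# Fit ln(I/I0) = a*t + b*t**2  (quadratic attenuation model with no constant term)
A = np.column_stack([thickness, thickness**2])
a, b = np.linalg.lstsq(A, np.log(transmission), rcond=None)[0]

def epl_from_transmission(t_ratio):
    """Invert the quadratic per pixel to recover the water equivalent path length (cm)."""
    c = np.log(t_ratio)
    roots = np.roots([b, a, -c])              # solve b*t**2 + a*t - c = 0
    real = roots[np.isreal(roots)].real
    return float(real[real >= 0.0].min())     # physical (smallest non-negative) root

print(f"fitted coefficients: a = {a:.4f}, b = {b:.5f}")
print("EPL for transmission 0.60:", round(epl_from_transmission(0.60), 2), "cm")
```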

  1. A novel method for patient exit and entrance dose prediction based on water equivalent path length measured with an amorphous silicon electronic portal imaging device

    NASA Astrophysics Data System (ADS)

    Kavuma, Awusi; Glegg, Martin; Metwaly, Mohamed; Currie, Garry; Elliott, Alex

    2010-01-01

    In vivo dosimetry is one of the quality assurance tools used in radiotherapy to monitor the dose delivered to the patient. Electronic portal imaging device (EPID) images for a set of solid water phantoms of varying thicknesses were acquired and the data fitted onto a quadratic equation, which relates the reduction in photon beam intensity to the attenuation coefficient and material thickness at a reference condition. The quadratic model is used to convert the measured grey scale value into water equivalent path length (EPL) at each pixel for any material imaged by the detector. For any other non-reference conditions, scatter, field size and MU variation effects on the image were corrected by relative measurements using an ionization chamber and an EPID. The 2D EPL is linked to the percentage exit dose table, for different thicknesses and field sizes, thereby converting the plane pixel values at each point into a 2D dose map. The off-axis ratio is corrected using envelope and boundary profiles generated from the treatment planning system (TPS). The method requires field size, monitor unit and source-to-surface distance (SSD) as clinical input parameters to predict the exit dose, which is then used to determine the entrance dose. The measured pixel dose maps were compared with calculated doses from TPS for both entrance and exit depth of phantom. The gamma index at 3% dose difference (DD) and 3 mm distance to agreement (DTA) resulted in an average of 97% passing for the square fields of 5, 10, 15 and 20 cm. The exit dose EPID dose distributions predicted by the algorithm were in better agreement with TPS-calculated doses than phantom entrance dose distributions.

  2. New emission factors for Australian vegetation fires measured using open-path Fourier transform infrared spectroscopy - Part 1: Methods and Australian temperate forest fires

    NASA Astrophysics Data System (ADS)

    Paton-Walsh, C.; Smith, T. E. L.; Young, E. L.; Griffith, D. W. T.; Guérette, É.-A.

    2014-10-01

    Biomass burning releases trace gases and aerosol particles that significantly affect the composition and chemistry of the atmosphere. Australia contributes approximately 8% of gross global carbon emissions from biomass burning, yet there are few previous measurements of emissions from Australian forest fires available in the literature. This paper describes the results of field measurements of trace gases emitted during hazard reduction burns in Australian temperate forests using open-path Fourier transform infrared spectroscopy. In a companion paper, similar techniques are used to characterise the emissions from hazard reduction burns in the savanna regions of the Northern Territory. Details of the experimental methods are explained, including both the measurement set-up and the analysis techniques employed. The advantages and disadvantages of different ways to estimate whole-fire emission factors are discussed and a measurement uncertainty budget is developed. Emission factors for Australian temperate forest fires are measured locally for the first time for many trace gases. Where ecosystem-relevant data are required, we recommend the following emission factors for Australian temperate forest fires (in grams of gas emitted per kilogram of dry fuel burned) which are our mean measured values: 1620 ± 160 g kg-1 of carbon dioxide; 120 ± 20 g kg-1 of carbon monoxide; 3.6 ± 1.1 g kg-1 of methane; 1.3 ± 0.3 g kg-1 of ethylene; 1.7 ± 0.4 g kg-1 of formaldehyde; 2.4 ± 1.2 g kg-1 of methanol; 3.8 ± 1.3 g kg-1 of acetic acid; 0.4 ± 0.2 g kg-1 of formic acid; 1.6 ± 0.6 g kg-1 of ammonia; 0.15 ± 0.09 g kg-1 of nitrous oxide and 0.5 ± 0.2 g kg-1 of ethane.

  3. Multiple paths in complex tasks

    NASA Technical Reports Server (NTRS)

    Galanter, Eugene; Wiegand, Thomas; Mark, Gloria

    1987-01-01

    The relationship between utility judgments of subtask paths and the utility of the task as a whole was examined. The convergent validation procedure is based on the assumption that measurements of the same quantity done with different methods should covary. The utility measures of the subtasks were obtained during the performance of an aircraft flight controller navigation task. Analyses helped decide among various models of subtask utility combination, whether the utility ratings of subtask paths predict the whole task's utility rating, and indirectly, whether judgmental models need to include the equivalent of cognitive noise.

  4. Path coloring on the Mesh

    SciTech Connect

    Rabani, Y.

    1996-12-31

    In the minimum path coloring problem, we are given a list of pairs of vertices of a graph. We are asked to connect each pair by a colored path. Paths of the same color must be edge disjoint. Our objective is to minimize the number of colors used. This problem was raised by Aggarwal et al. and by Raghavan and Upfal as a model for routing in all-optical networks. It is also related to questions in circuit routing. In this paper, we improve the O(ln N) approximation result of Kleinberg and Tardos for path coloring on the N x N mesh. We give an O(1) approximation algorithm to the number of colors needed, and a poly(ln ln N) approximation algorithm to the choice of paths and colors. To the best of our knowledge, these are the first sub-logarithmic bounds for any network other than trees, rings, or trees of rings. Our results are based on developing new techniques for randomized rounding. These techniques iteratively improve a fractional solution until it approaches integrality. They are motivated by the method used by Leighton, Maggs, and Rao for packet routing.

  5. Two Paths Diverged: Exploring Trajectories, Protocols, and Dynamic Phases

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd Robert

    Using tools of statistical mechanics, it is routine to average over the distribution of microscopic configurations to obtain equilibrium free energies. These free energies teach us about the most likely molecular arrangements and the probability of observing deviations from the norm. Frequently, it is necessary to interrogate the probability not just of static arrangements, but of dynamical events, in which case analogous statistical mechanical tools may be applied to study the distribution of molecular trajectories. Numerical study of these trajectory spaces requires algorithms which efficiently sample the possible trajectories. We study in detail one such Monte Carlo algorithm, transition path sampling, and use a non-equilibrium statistical mechanical perspective to illuminate why the algorithm cannot easily be adapted to study some problems involving long-timescale dynamics. Algorithmically generating highly-correlated trajectories, a necessity for transition path sampling, grows exponentially more challenging for longer trajectories unless the dynamics is strongly guided by the "noise history", the sequence of random numbers representing the noise terms in the stochastic dynamics. Langevin dynamics of Weeks-Chandler-Andersen (WCA) particles in two dimensions lacks this strong noise guidance, so it is challenging to use transition path sampling to study rare dynamical events in long trajectories of WCA particles. The spin flip dynamics of a two-dimensional Ising model, on the other hand, can be guided by the noise history to achieve efficient path sampling. For systems that can be efficiently sampled with path sampling, we show that it is possible to simultaneously sample both the paths and the (potentially vast) space of non-equilibrium protocols to efficiently learn how rate constants vary with protocols and to identify low-dissipation protocols. When high-dimensional molecular dynamics can be coarse-grained and represented by a simplified dynamics on a low

  6. Experimental Probability in Elementary School

    ERIC Educational Resources Information Center

    Andrew, Lane

    2009-01-01

    Concepts in probability can be more readily understood if students are first exposed to probability via experiment. Performing probability experiments encourages students to develop understandings of probability grounded in real events, as opposed to merely computing answers based on formulae.

  7. Acceptance, values, and probability.

    PubMed

    Steel, Daniel

    2015-10-01

    This essay makes a case for regarding personal probabilities used in Bayesian analyses of confirmation as objects of acceptance and rejection. That in turn entails that personal probabilities are subject to the argument from inductive risk, which aims to show non-epistemic values can legitimately influence scientific decisions about which hypotheses to accept. In a Bayesian context, the argument from inductive risk suggests that value judgments can influence decisions about which probability models to accept for likelihoods and priors. As a consequence, if the argument from inductive risk is sound, then non-epistemic values can affect not only the level of evidence deemed necessary to accept a hypothesis but also degrees of confirmation themselves. PMID:26386533

  8. Deciphering P-T paths in metamorphic rocks involving zoned minerals using quantified maps (XMapTools software) and thermodynamics methods: Examples from the Alps and the Himalaya.

    NASA Astrophysics Data System (ADS)

    Lanari, P.; Vidal, O.; Schwartz, S.; Riel, N.; Guillot, S.; Lewin, E.

    2012-04-01

    Metamorphic rocks are made of a mosaic of local thermodynamic equilibria involving minerals that grew at different times and under different pressure (P) and temperature (T) conditions. These local (in space but also in time) equilibria can be identified using micro-structural and textural criteria, but also tested using multi-equilibrium techniques. However, linking deformation with metamorphic conditions requires spatially continuous estimates of P and T conditions in at least two dimensions (P-T maps), which can be superimposed on the observed deformation structures. To this end, we have developed a new Matlab-based GUI software for microprobe X-ray map processing (XMapTools, http://www.xmaptools.com) based on the quantification method of De Andrade et al. (2006). The XMapTools software includes functions for quantification processing, two chemical modules (Chem2D, Triplot3D), structural formula functions for common minerals, and more than 50 empirical and semi-empirical geothermobarometers obtained from the literature. The XMapTools software can easily be coupled with multi-equilibrium thermobarometric calculations. We will present examples of application for two natural cases involving zoned minerals. The first example is a low-grade metapelite from the paleo-subduction wedge in the Western Alps (Schistes Lustrés unit) that contains only zoned chlorite and phengite, together with quartz. The second sample is a Himalayan eclogite from the high-pressure unit of Stak (Pakistan) with an eclogitic garnet-omphacite assemblage retrogressed into a clinopyroxene-plagioclase-amphibole symplectite, and later into amphibole-biotite during the collisional event under crustal conditions. In both samples, P-T paths were recovered using multi-equilibrium or semi-empirical geothermobarometers included in the XMapTools package. The results will be compared and discussed with pseudosections calculated with the sample bulk composition and with different local bulk rock compositions estimated with XMap

  9. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SPECTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a rapid and cost-effective methodology developed to estimate emission factors of organic compounds from a variety of area sources. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations of hydrocarb...

  10. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SPECTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a methodology developed to estimate emission factors for a variety of different area sources in a rapid, accurate, and cost-effective manner. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...

  11. The Path of Carbon in Photosynthesis VI.

    DOE R&D Accomplishments Database

    Calvin, M.

    1949-06-30

    This paper is a compilation of the essential results of our experimental work in the determination of the path of carbon in photosynthesis. There are discussions of the dark fixation of photosynthesis and methods of separation and identification including paper chromatography and radioautography. The definition of the path of carbon in photosynthesis by the distribution of radioactivity within the compounds is described.

  12. Squeezed states and path integrals

    NASA Technical Reports Server (NTRS)

    Daubechies, Ingrid; Klauder, John R.

    1992-01-01

    The continuous-time regularization scheme for defining phase-space path integrals is briefly reviewed as a method to define a quantization procedure that is completely covariant under all smooth canonical coordinate transformations. As an illustration of this method, a limited set of transformations is discussed that have an image in the set of the usual squeezed states. It is noteworthy that even this limited set of transformations offers new possibilities for stationary phase approximations to quantum mechanical propagators.

  13. Varga: On Probability.

    ERIC Educational Resources Information Center

    Varga, Tamas

    This booklet resulted from a 1980 visit by the author, a Hungarian mathematics educator, to the Teachers' Center Project at Southern Illinois University at Edwardsville. Included are activities and problems that make probability concepts accessible to young children. The topics considered are: two probability games; choosing two beads; matching…

  14. Application of Quantum Probability

    NASA Astrophysics Data System (ADS)

    Bohdalová, Mária; Kalina, Martin; Nánásiová, Ol'ga

    2009-03-01

    This is the first attempt to smooth time series using estimators that apply quantum probability with causality (non-commutative s-maps on an orthomodular lattice). In this context, it means that we use a non-symmetric covariance matrix in the construction of our estimator.

  15. Univariate Probability Distributions

    ERIC Educational Resources Information Center

    Leemis, Lawrence M.; Luckett, Daniel J.; Powell, Austin G.; Vermeer, Peter E.

    2012-01-01

    We describe a web-based interactive graphic that can be used as a resource in introductory classes in mathematical statistics. This interactive graphic presents 76 common univariate distributions and gives details on (a) various features of the distribution such as the functional form of the probability density function and cumulative distribution…

  16. Dynamic probability estimator for machine learning.

    PubMed

    Starzyk, Janusz A; Wang, Feng

    2004-03-01

    An efficient algorithm for dynamic estimation of probabilities, without division, on an unlimited number of input data is presented. The method estimates probabilities of the sampled data from the raw sample count, while keeping the total count value constant. Accuracy of the estimate depends on the counter size, rather than on the total number of data points. The estimator follows variations of the incoming data probability within a fixed window size, without explicit implementation of the windowing technique. The total design area is very small and all probabilities are estimated concurrently. The dynamic probability estimator was implemented using a programmable gate array from Xilinx. The performance of this implementation is evaluated in terms of area efficiency and execution time. This method is suitable for the highly integrated design of artificial neural networks where a large number of dynamic probability estimators can work concurrently. PMID:15384523
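
    The abstract does not spell out the update rule, so the sketch below only illustrates the general idea of estimating probabilities from counters whose total is held fixed; the constant-sum displacement scheme here is an assumption made for illustration, not the published design.

```python
import random

class ConstantSumEstimator:
    """Track category probabilities with a fixed total count.

    Once the total reaches `capacity`, an existing count is decremented at
    random (with probability proportional to its size) before each increment,
    so recent data gradually displaces old data within a fixed-size window.
    """

    def __init__(self, categories, capacity=256, seed=0):
        self.counts = {c: 0 for c in categories}
        self.capacity = capacity
        self.total = 0
        self.rng = random.Random(seed)

    def update(self, category):
        if self.total >= self.capacity:
            victim = self.rng.choices(list(self.counts),
                                      weights=list(self.counts.values()))[0]
            self.counts[victim] -= 1
            self.total -= 1
        self.counts[category] += 1
        self.total += 1

    def probability(self, category):
        return self.counts[category] / self.total if self.total else 0.0

est = ConstantSumEstimator(["a", "b"], seed=1)
stream = random.Random(2)
for _ in range(2000):
    est.update("a" if stream.random() < 0.8 else "b")
print(round(est.probability("a"), 2))   # close to 0.8
```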

  17. Precise estimation of pressure-temperature paths from zoned minerals using Markov random field modeling: theory and synthetic inversion

    NASA Astrophysics Data System (ADS)

    Kuwatani, Tatsu; Nagata, Kenji; Okada, Masato; Toriumi, Mitsuhiro

    2012-03-01

    The chemical zoning profile in metamorphic minerals is often used to deduce the pressure-temperature (P-T) history of rock. However, it remains difficult to restore detailed paths from zoned minerals because thermobarometric evaluation of metamorphic conditions involves several uncertainties, including measurement errors and geological noise. We propose a new stochastic framework for estimating precise P-T paths from a chemical zoning structure using the Markov random field (MRF) model, which is a type of Bayesian stochastic method that is often applied to image analysis. The continuity of pressure and temperature during mineral growth is incorporated through Gaussian Markov chains as prior probabilities in order to apply the MRF model to the P-T path inversion. The most probable P-T path can be obtained by maximizing the posterior probability of the sequential set of P and T given the observed compositions of zoned minerals. Synthetic P-T inversion tests were conducted in order to investigate the effectiveness and validity of the proposed model using zoned Mg-Fe-Ca garnet in the divariant KNCFMASH system. In the present study, the steepest descent method was implemented in order to maximize the posterior probability using the Markov chain Monte Carlo algorithm. The proposed method successfully reproduced the detailed shape of the synthetic P-T path by appropriately eliminating the statistical compositional noise without operator subjectivity or prior knowledge. It was also used to simultaneously evaluate the uncertainty of pressure, temperature, and mineral compositions for all measurement points. The MRF method may have the potential to deal with several geological uncertainties, which cause cumbersome systematic errors, through its Bayesian approach and flexible formalism, making it a potentially powerful tool for various inverse problems in petrology.
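
    As a schematic illustration of the Gaussian Markov-chain prior (not the authors' full MRF implementation), the MAP estimate of a smooth one-dimensional sequence from noisy thermobarometric estimates reduces to a ridge-style linear solve; the smoothness weight lam and the synthetic temperatures below are placeholders.

```python
import numpy as np

def map_smooth(y, lam=10.0, sigma=1.0):
    """MAP estimate of a latent sequence x from noisy observations y, with a
    Gaussian random-walk (Markov chain) prior on successive differences.

    Minimizes  sum((y - x)**2) / sigma**2 + lam * sum((x[i+1] - x[i])**2).
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), axis=0)                 # first-difference operator
    A = np.eye(n) / sigma**2 + lam * D.T @ D
    return np.linalg.solve(A, y / sigma**2)

# Synthetic noisy temperature path (deg C), standing in for zoned-garnet estimates
rng = np.random.default_rng(0)
true_T = np.linspace(450.0, 620.0, 40)
noisy_T = true_T + rng.normal(0.0, 15.0, size=true_T.size)
print(np.round(map_smooth(noisy_T, lam=20.0, sigma=15.0)[:5], 1))
```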

  18. The Objective Borderline Method (OBM): A Probability-Based Model for Setting up an Objective Pass/Fail Cut-Off Score in Medical Programme Assessments

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-01-01

    The decision to pass or fail a medical student is a "high stakes" one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting up pass/fail cut-off scores were compared: the…

  19. Waste Package Misload Probability

    SciTech Connect

    J.K. Knudsen

    2001-11-20

    The objective of this calculation is to calculate the probability of occurrence for fuel assembly (FA) misloads (i.e., an FA placed in the wrong location) and FA damage during FA movements. The scope of this calculation is provided by the information obtained from the Framatome ANP 2001a report. The first step in this calculation is to categorize each fuel-handling event that occurred at nuclear power plants. The different categories are based on FAs being damaged or misloaded. The next step is to determine the total number of FAs involved in the event. Using the information, a probability of occurrence will be calculated for FA misload and FA damage events. This calculation is an expansion of preliminary work performed by Framatome ANP 2001a.

  20. Mapped interpolation scheme for single-point energy corrections in reaction rate calculations and a critical evaluation of dual-level reaction path dynamics methods

    SciTech Connect

    Chuang, Y.Y.; Truhlar, D.G.; Corchado, J.C.

    1999-02-25

    Three procedures for incorporating higher level electronic structure data into reaction path dynamics calculations are tested. In one procedure, variational transition state theory with interpolated single-point energies, which is denoted VTST-ISPE, a few extra energies calculated with a higher level theory along the lower level reaction path are used to correct the classical energetic profile of the reaction. In the second procedure, denoted variational transition state theory with interpolated optimized corrections (VTST-IOC), which the authors introduced earlier, higher level corrections to energies, frequencies, and moments of inertia are based on stationary-point geometries reoptimized at a higher level than the reaction path was calculated. The third procedure, called interpolated optimized energies (IOE), is like IOC except it omits the frequency correction. Three hydrogen-transfer reactions, CH3 + H′H → CH3H′ + H (R1), OH + H′H → HOH′ + H (R2), and OH + H′CH3 → HOH′ + CH3 (R3), are used to test and validate the procedures by comparing their predictions to the reaction rate evaluated with a full variational transition state theory calculation including multidimensional tunneling (VTST/MT) at the higher level. The authors present a very efficient scheme for carrying out VTST-ISPE calculations, which are popular due to their lower computational cost. They also show, on the basis of calculations of the reactions R1-R3 with eight pairs of higher and lower levels, that VTST-IOC with higher level data only at stationary points is a more reliable dual-level procedure than VTST-ISPE with higher level energies all along the reaction path. Although the frequencies along the reaction path are not corrected in the IOE scheme, the results are still better than those from VTST-ISPE; this indicates the importance of optimizing the geometry at the highest possible level.

  1. Dynamic versus static fission paths with realistic interactions

    NASA Astrophysics Data System (ADS)

    Giuliani, Samuel A.; Robledo, Luis M.; Rodríguez-Guzmán, R.

    2014-11-01

    The properties of dynamic (least action) fission paths are analyzed and compared to the ones of the more traditional static (least energy) paths. Both the Barcelona-Catania-Paris-Madrid and Gogny D1M energy density functionals are used in the calculation of the Hartree-Fock-Bogoliubov (HFB) constrained configurations providing the potential energy and collective inertias. The action is computed as in the Wentzel-Kramers-Brillouin method. A full variational search of the least-action path over the complete variational space of HFB wave functions is cumbersome and probably unnecessary if the relevant degrees of freedom are identified. In this paper, we consider the particle number fluctuation degree of freedom that explores the amount of pairing correlations in the wave function. For a given shape, the minimum action can be up to a factor of 3 smaller than the action computed for the minimum energy state with the same shape. The impact of this reduction on the lifetimes is enormous and dramatically improves the agreement with experimental data in the few examples considered.
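
    For readers unfamiliar with the action referred to above: along a one-dimensional path in the collective space it is, schematically, the integral of sqrt(2 B(q) [V(q) - E]) over the classically forbidden region. The sketch below evaluates such an integral numerically with made-up potential, inertia, and energy (ħ = 1); the real calculation is carried out over the full space of HFB configurations.

```python
import numpy as np

def wkb_action(q, potential, inertia, energy):
    """Numerically integrate sqrt(2 * B(q) * [V(q) - E]) along a 1D path using
    the trapezoidal rule; only the classically forbidden region (V > E) contributes."""
    integrand = np.sqrt(2.0 * inertia(q) * np.clip(potential(q) - energy, 0.0, None))
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(q)))

q = np.linspace(0.0, 1.0, 500)                       # collective coordinate (arbitrary units)
V = lambda x: 6.0 * x**2 * (1.0 - x)**2              # toy barrier between two minima
B = lambda x: 1.0 + 0.3 * np.sin(np.pi * x)          # coordinate-dependent collective inertia
print(round(wkb_action(q, V, B, energy=0.1), 3))
```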

  2. Probability, Information and Statistical Physics

    NASA Astrophysics Data System (ADS)

    Kuzemsky, A. L.

    2016-03-01

    In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the inter-relations between theories. The aim is tutorial, i.e., to provide a basic introduction to the analysis and applications of probabilistic concepts to the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information, and statistical description with regard to basic notions of the statistical mechanics of complex systems. It also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the purpose of making these ideas easier to understand and apply.

  3. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  4. Probability Issues in without Replacement Sampling

    ERIC Educational Resources Information Center

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…

  5. Experience Matters: Information Acquisition Optimizes Probability Gain

    PubMed Central

    Nelson, Jonathan D.; McKenzie, Craig R.M.; Cottrell, Garrison W.; Sejnowski, Terrence J.

    2010-01-01

    Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information—information gain, Kullback-Leibler distance, probability gain (error minimization), and impact—are equally consistent with extant data on human information acquisition. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects’ information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects’ preference for probability gain is robust, suggesting that the other models contribute little to subjects’ search behavior. PMID:20525915
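
    For reference, probability gain as used here is the expected improvement in the probability of a correct guess after seeing the outcome of a query; a minimal calculation is sketched below with made-up environmental probabilities.

```python
def probability_gain(prior, likelihoods):
    """Expected max posterior (over query outcomes) minus max prior.

    prior       : P(category) for each category
    likelihoods : rows indexed by query outcome, P(outcome | category)
    """
    gain = 0.0
    for outcome_lk in likelihoods:
        p_outcome = sum(l * p for l, p in zip(outcome_lk, prior))
        if p_outcome == 0.0:
            continue
        posterior = [l * p / p_outcome for l, p in zip(outcome_lk, prior)]
        gain += p_outcome * max(posterior)
    return gain - max(prior)

# Two categories and a binary test; all numbers are illustrative only
print(round(probability_gain(prior=[0.7, 0.3],
                             likelihoods=[[0.9, 0.2],    # P(positive | category)
                                          [0.1, 0.8]]),  # P(negative | category)
            3))
```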

  6. Experience matters: information acquisition optimizes probability gain.

    PubMed

    Nelson, Jonathan D; McKenzie, Craig R M; Cottrell, Garrison W; Sejnowski, Terrence J

    2010-07-01

    Deciding which piece of information to acquire or attend to is fundamental to perception, categorization, medical diagnosis, and scientific inference. Four statistical theories of the value of information-information gain, Kullback-Leibler distance, probability gain (error minimization), and impact-are equally consistent with extant data on human information acquisition. Three experiments, designed via computer optimization to be maximally informative, tested which of these theories best describes human information search. Experiment 1, which used natural sampling and experience-based learning to convey environmental probabilities, found that probability gain explained subjects' information search better than the other statistical theories or the probability-of-certainty heuristic. Experiments 1 and 2 found that subjects behaved differently when the standard method of verbally presented summary statistics (rather than experience-based learning) was used to convey environmental probabilities. Experiment 3 found that subjects' preference for probability gain is robust, suggesting that the other models contribute little to subjects' search behavior. PMID:20525915

  7. Probability mapping of contaminants

    SciTech Connect

    Rautman, C.A.; Kaplan, P.G.; McGraw, M.A.; Istok, J.D.; Sigda, J.M.

    1994-04-01

    Exhaustive characterization of a contaminated site is a physical and practical impossibility. Descriptions of the nature, extent, and level of contamination, as well as decisions regarding proposed remediation activities, must be made in a state of uncertainty based upon limited physical sampling. The probability mapping approach illustrated in this paper appears to offer site operators a reasonable, quantitative methodology for many environmental remediation decisions and allows evaluation of the risk associated with those decisions. For example, output from this approach can be used in quantitative, cost-based decision models for evaluating possible site characterization and/or remediation plans, resulting in selection of the risk-adjusted, least-cost alternative. The methodology is completely general, and the techniques are applicable to a wide variety of environmental restoration projects. The probability-mapping approach is illustrated by application to a contaminated site at the former DOE Feed Materials Production Center near Fernald, Ohio. Soil geochemical data, collected as part of the Uranium-in-Soils Integrated Demonstration Project, have been used to construct a number of geostatistical simulations of potential contamination for parcels approximately the size of a selective remediation unit (the 3-m width of a bulldozer blade). Each such simulation accurately reflects the actual measured sample values, and reproduces the univariate statistics and spatial character of the extant data. Post-processing of a large number of these equally likely statistically similar images produces maps directly showing the probability of exceeding specified levels of contamination (potential clean-up or personnel-hazard thresholds).
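
    The post-processing step described at the end of the abstract, turning a stack of equally likely realizations into a map of exceedance probabilities, amounts to a per-cell frequency count; the array shapes, field statistics, and threshold below are illustrative.

```python
import numpy as np

def exceedance_probability_map(simulations, threshold):
    """Fraction of equally likely realizations exceeding `threshold` at each cell.

    simulations : array of shape (n_realizations, ny, nx)
    """
    return np.mean(np.asarray(simulations) > threshold, axis=0)

# 200 synthetic realizations of a 50 x 50 contamination field (arbitrary units)
rng = np.random.default_rng(42)
realizations = rng.lognormal(mean=2.0, sigma=0.5, size=(200, 50, 50))
prob_map = exceedance_probability_map(realizations, threshold=10.0)
print(prob_map.shape, round(float(prob_map.mean()), 3))
```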

  8. The Logic Behind Feynman's Paths

    NASA Astrophysics Data System (ADS)

    García Álvarez, Edgardo T.

    The classical notions of continuity and mechanical causality are set aside in order to reformulate the Quantum Theory starting from two principles: (I) the intrinsic randomness of quantum processes at the microphysical level, (II) the projective representations of symmetries of the system. The second principle determines the geometry and then a new logic for describing the history of events (Feynman's paths) that modifies the rules of classical probabilistic calculus. The notion of a classical trajectory is replaced by a history of spontaneous, random and discontinuous events. So the theory is reduced to determining the probability distribution for such histories in accordance with the symmetries of the system. The representation of the logic in terms of amplitudes leads to Feynman rules and, alternatively, its representation in terms of projectors results in the Schwinger trace formula.

  9. A Path to Discovery

    ERIC Educational Resources Information Center

    Stegemoller, William; Stegemoller, Rebecca

    2004-01-01

    The path taken and the turns made as a turtle traces a polygon are examined to discover an important theorem in geometry. A unique tool, the Angle Adder, is implemented in the investigation. (Contains 9 figures.)

  10. Tortuous path chemical preconcentrator

    DOEpatents

    Manginell, Ronald P.; Lewis, Patrick R.; Adkins, Douglas R.; Wheeler, David R.; Simonson, Robert J.

    2010-09-21

    A non-planar, tortuous path chemical preconcentrator has a high internal surface area with a heatable sorptive coating that can be used to selectively collect and concentrate one or more chemical species of interest from a fluid stream and rapidly release them as a concentrated plug into an analytical or microanalytical chain for separation and detection. The non-planar chemical preconcentrator comprises a sorptive support structure having a tortuous flow path. The tortuosity provides repeated twists, turns, and bends to the flow, thereby increasing the interfacial contact between the sample fluid stream and the sorptive material. The tortuous path also provides more opportunities for desorption and readsorption of volatile species. Further, the thermal efficiency of the tortuous path chemical preconcentrator is comparable or superior to that of the prior non-planar chemical preconcentrator. Finally, the tortuosity can be varied in different directions to optimize flow rates during the adsorption and desorption phases of operation of the preconcentrator.

  11. Hamiltonian formalism and path entropy maximization

    NASA Astrophysics Data System (ADS)

    Davis, Sergio; González, Diego

    2015-10-01

    Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.

  12. Analysis and Monte Carlo simulation of near-terminal aircraft flight paths

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Matthews, C. G.

    1982-01-01

    The flight paths of arriving and departing aircraft at an airport are stochastically represented. Radar data of the aircraft movements are used to decompose the flight paths into linear and curvilinear segments. Variables which describe the segments are derived, and the best fitting probability distributions of the variables, based on a sample of flight paths, are found. Conversely, given information on the probability distribution of the variables, generation of a random sample of flight paths in a Monte Carlo simulation is discussed. Actual flight paths at Dulles International Airport are analyzed and simulated.
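
    A toy generator in the spirit of the approach described above: each simulated path is a sequence of linear segments whose lengths and heading changes are drawn from assumed distributions; the exponential and normal choices and their parameters are placeholders, not the distributions fitted to the Dulles radar data.

```python
import math
import random

def simulate_flight_path(n_segments=8, seed=None):
    """Generate one random 2D flight path as a list of (x, y) waypoints.

    Segment lengths (km) and heading changes (deg) are sampled from assumed
    distributions; a real study would use distributions fitted to radar data.
    """
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, rng.uniform(0.0, 360.0)
    waypoints = [(x, y)]
    for _ in range(n_segments):
        length = rng.expovariate(1.0 / 5.0)           # mean segment length of 5 km
        heading += rng.gauss(0.0, 15.0)               # gentle random heading change
        x += length * math.cos(math.radians(heading))
        y += length * math.sin(math.radians(heading))
        waypoints.append((x, y))
    return waypoints

for px, py in simulate_flight_path(seed=7):
    print(f"{px:8.2f} {py:8.2f}")
```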

  13. The Estimation of Tree Posterior Probabilities Using Conditional Clade Probability Distributions

    PubMed Central

    Larget, Bret

    2013-01-01

    In this article I introduce the idea of conditional independence of separated subtrees as a principle by which to estimate the posterior probability of trees using conditional clade probability distributions rather than simple sample relative frequencies. I describe an algorithm for these calculations and software which implements these ideas. I show that these alternative calculations are very similar to simple sample relative frequencies for high probability trees but are substantially more accurate for relatively low probability trees. The method allows the posterior probability of unsampled trees to be calculated when these trees contain only clades that are in other sampled trees. Furthermore, the method can be used to estimate the total probability of the set of sampled trees which provides a measure of the thoroughness of a posterior sample. [Bayesian phylogenetics; conditional clade distributions; improved accuracy; posterior probabilities of trees.] PMID:23479066

  14. Probability, statistics, and computational science.

    PubMed

    Beerenwinkel, Niko; Siebourg, Juliane

    2012-01-01

    In this chapter, we review basic concepts from probability theory and computational statistics that are fundamental to evolutionary genomics. We provide a very basic introduction to statistical modeling and discuss general principles, including maximum likelihood and Bayesian inference. Markov chains, hidden Markov models, and Bayesian network models are introduced in more detail as they occur frequently and in many variations in genomics applications. In particular, we discuss efficient inference algorithms and methods for learning these models from partially observed data. Several simple examples are given throughout the text, some of which point to models that are discussed in more detail in subsequent chapters. PMID:22407706

  15. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies.

    PubMed

    Mielke, Steven L; Truhlar, Donald G

    2016-01-21

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023
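
    The toy sketch below illustrates only the generic rejection-sampling idea used above (propose a free-particle-style closed path, accept it with a probability given by a harmonic importance weight); it is not the WPIS implementation, and the path construction, weight function and parameters are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def free_particle_closed_path(n_beads, sigma):
    """Sample a closed path around a centroid at the origin from Gaussian Fourier
    modes; a toy stand-in for the free-particle path distribution."""
    t = np.arange(n_beads)
    path = np.zeros(n_beads)
    for k in range(1, n_beads // 2):
        a, b = rng.normal(0.0, sigma / k, size=2)
        path += a * np.cos(2 * np.pi * k * t / n_beads) + b * np.sin(2 * np.pi * k * t / n_beads)
    return path

def harmonic_weight(path, omega2=1.0, beta_over_p=0.05):
    """Harmonic importance weight in (0, 1]; larger excursions are penalised."""
    return np.exp(-0.5 * beta_over_p * omega2 * np.sum(path**2))

accepted = []
for _ in range(5000):
    p = free_particle_closed_path(n_beads=32, sigma=0.5)
    if rng.uniform() < harmonic_weight(p):     # rejection step
        accepted.append(p)

print(f"acceptance rate: {len(accepted) / 5000:.3f}")
```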

  16. A whole-path importance-sampling scheme for Feynman path integral calculations of absolute partition functions and free energies

    NASA Astrophysics Data System (ADS)

    Mielke, Steven L.; Truhlar, Donald G.

    2016-01-01

    Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.

  17. Path optimization for oil probe

    NASA Astrophysics Data System (ADS)

    Smith, O'Neil; Rahmes, Mark; Blue, Mark; Peter, Adrian

    2014-05-01

    We discuss a robust method for optimal oil probe path planning inspired by medical imaging. Horizontal wells require three-dimensional steering made possible by the rotary steerable capabilities of the system, which allows the hole to intersect multiple target shale gas zones. Horizontal "legs" can be over a mile long; the longer the exposure length, the more oil and natural gas is drained and the faster it can flow. More oil and natural gas can be produced with fewer wells and less surface disturbance. Horizontal drilling can help producers tap oil and natural gas deposits under surface areas where a vertical well cannot be drilled, such as under developed or environmentally sensitive areas. Drilling creates well paths which have multiple twists and turns to try to hit multiple accumulations from a single well location. Our algorithm can be used to augment current state of the art methods. Our goal is to obtain a 3D path with nodes describing the optimal route to the destination. This algorithm works with BIG data and saves cost in planning for probe insertion. Our solution may be able to help increase the energy extracted vs. input energy.

  18. Nonholonomic catheter path reconstruction using electromagnetic tracking

    NASA Astrophysics Data System (ADS)

    Lugez, Elodie; Sadjadi, Hossein; Akl, Selim G.; Fichtinger, Gabor

    2015-03-01

    Catheter path reconstruction is a necessary step in many clinical procedures, such as cardiovascular interventions and high-dose-rate brachytherapy. To overcome limitations of standard imaging modalities, electromagnetic tracking has been employed to reconstruct catheter paths. However, tracking errors pose a challenge in accurate path reconstructions. We address this challenge by means of a filtering technique incorporating the electromagnetic measurements with the nonholonomic motion constraints of the sensor inside a catheter. The nonholonomic motion model of the sensor within the catheter and the electromagnetic measurement data were integrated using an extended Kalman filter. The performance of our proposed approach was experimentally evaluated using the Ascension's 3D Guidance trakStar electromagnetic tracker. Sensor measurements were recorded during insertions of an electromagnetic sensor (model 55) along ten predefined ground truth paths. Our method was implemented in MATLAB and applied to the measurement data. Our reconstruction results were compared to raw measurements as well as filtered measurements provided by the manufacturer. The mean of the root-mean-square (RMS) errors along the ten paths was 3.7 mm for the raw measurements, and 3.3 mm with manufacturer's filters. Our approach effectively reduced the mean RMS error to 2.7 mm. Compared to other filtering methods, our approach successfully improved the path reconstruction accuracy by exploiting the sensor's nonholonomic motion constraints in its formulation. Our approach seems promising for a variety of clinical procedures involving reconstruction of a catheter path.
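
    A compact sketch of an extended Kalman filter applied to a nonholonomic unicycle-type motion model with position-only measurements, in the spirit of the approach described above; the motion model, noise covariances and control inputs are assumptions, not the paper's catheter model or trakStar data.

```python
import numpy as np

rng = np.random.default_rng(9)
dt = 0.1
Q = np.diag([1e-4, 1e-4, 1e-4])     # process noise covariance (assumed)
R = np.diag([0.01, 0.01])           # measurement noise covariance (assumed)

def f(x, u):
    """Nonholonomic unicycle motion model: state [x, y, theta], input [v, omega]."""
    px, py, th = x
    v, om = u
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + om * dt])

def F_jac(x, u):
    """Jacobian of the motion model with respect to the state."""
    _, _, th = x
    v, _ = u
    return np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                     [0.0, 1.0,  v * np.cos(th) * dt],
                     [0.0, 0.0, 1.0]])

H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])   # only position is measured

x_true = np.array([0.0, 0.0, 0.0])
x_est, P = x_true.copy(), np.eye(3) * 0.1
for _ in range(100):
    u = np.array([1.0, 0.2])                       # constant speed and turn rate
    x_true = f(x_true, u)
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
    # EKF prediction
    x_est = f(x_est, u)
    F = F_jac(x_est, u)
    P = F @ P @ F.T + Q
    # EKF update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (z - H @ x_est)
    P = (np.eye(3) - K @ H) @ P

print("final position error:", np.linalg.norm(x_est[:2] - x_true[:2]))
```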

  19. Lineal-path function for random heterogeneous materials. II. Effect of polydispersivity

    SciTech Connect

    Lu, B.; Torquato, S. (Department of Chemical Engineering, North Carolina State University, Raleigh, North Carolina 27695-7910)

    1992-05-15

    The lineal-path function L(z) for two-phase heterogeneous media gives the probability of finding a line segment of length z wholly in one of the phases, say phase 1, when randomly thrown into the sample. The function L(z) is equivalent to the area fraction of phase 1 measured from the projected image of a slab of the material of thickness z onto a plane. The lineal-path function is of interest in stereology and is an important morphological descriptor in determining the transport properties of heterogeneous media. We develop a means to represent and compute L(z) for distributions of D-dimensional spheres with a polydispersivity in size, thereby extending an earlier analysis by us for monodispersed-sphere systems. Exact analytical expressions for L(z) in the case of fully penetrable polydispersed spheres for arbitrary dimensionality are obtained. In the instance of totally impenetrable polydispersed spheres, we develop accurate approximations for the lineal-path function that apply over a wide range of volume fractions. The lineal-path function was found to be quite sensitive to polydispersivity for D ≥ 2. We demonstrate how the measurement of the lineal-path function can yield the particle-size distribution of the particulate system, thus establishing a method to obtain the latter quantity.
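
    A brute-force Monte Carlo estimator of the lineal-path function L(z) for a toy two-dimensional system of fully penetrable disks, to make the definition above concrete; the disk configuration, sample sizes and boundary handling are simplified assumptions. Note that L(0) should approach the area fraction of phase 1.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 2D medium: phase 2 is a set of fully penetrable (overlapping)
# disks in a unit box; phase 1 is everything outside the disks.
n_disks, radius, box = 60, 0.05, 1.0
centers = rng.uniform(0, box, size=(n_disks, 2))

def in_phase1(points):
    """True where the sample points lie outside every disk."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return (d2 > radius ** 2).all(axis=1)

def lineal_path(z, n_throws=20000, n_samples=25):
    """Monte Carlo estimate of L(z): throw random segments of length z and count
    those lying wholly in phase 1.  Segments leaving the box are discarded
    from the tally (crude boundary handling)."""
    hits = trials = 0
    for _ in range(n_throws):
        start = rng.uniform(0, box, size=2)
        theta = rng.uniform(0, 2 * np.pi)
        direction = np.array([np.cos(theta), np.sin(theta)])
        pts = start + np.outer(np.linspace(0.0, z, n_samples), direction)
        if np.any(pts < 0) or np.any(pts > box):
            continue
        trials += 1
        hits += in_phase1(pts).all()
    return hits / trials

for z in (0.0, 0.05, 0.10):
    print(f"L({z:.2f}) ~ {lineal_path(z):.3f}")
```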

  20. Assessment of Approximate Coupled-Cluster and Algebraic-Diagrammatic-Construction Methods for Ground- and Excited-State Reaction Paths and the Conical-Intersection Seam of a Retinal-Chromophore Model.

    PubMed

    Tuna, Deniz; Lefrancois, Daniel; Wolański, Łukasz; Gozem, Samer; Schapiro, Igor; Andruniów, Tadeusz; Dreuw, Andreas; Olivucci, Massimo

    2015-12-01

    As a minimal model of the chromophore of rhodopsin proteins, the penta-2,4-dieniminium cation (PSB3) poses a challenging test system for the assessment of electronic-structure methods for the exploration of ground- and excited-state potential-energy surfaces, the topography of conical intersections, and the dimensionality (topology) of the branching space. Herein, we report on the performance of the approximate linear-response coupled-cluster method of second order (CC2) and the algebraic-diagrammatic-construction scheme of the polarization propagator of second and third orders (ADC(2) and ADC(3)). For the ADC(2) method, we considered both the strict and extended variants (ADC(2)-s and ADC(2)-x). For both CC2 and ADC methods, we also tested the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) variants. We have explored several ground- and excited-state reaction paths, a circular path centered around the S1/S0 surface crossing, and a 2D scan of the potential-energy surfaces along the branching space. We find that the CC2 and ADC methods yield a different dimensionality of the intersection space. While the ADC methods yield a linear intersection topology, we find a conical intersection topology for the CC2 method. We present computational evidence showing that the linear-response CC2 method yields a surface crossing between the reference state and the first response state featuring characteristics that are expected for a true conical intersection. Finally, we test the performance of these methods for the approximate geometry optimization of the S1/S0 minimum-energy conical intersection and compare the geometries with available data from multireference methods. The present study provides new insight into the performance of linear-response CC2 and polarization-propagator ADC methods for molecular electronic spectroscopy and applications in computational photochemistry. PMID:26642989

  1. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy

    PubMed Central

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-01-01

    Background and Purpose: Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Material and Methods: Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. Results: The models based on standard dose-volume metrics performed as well as those incorporating spatial information. Discrimination was similar between models, but the RFC model using the standard metrics (RFCstandard) had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. Conclusions: The RFCstandard model's performance is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. PMID:27240717
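
    A minimal sketch of random forest classification with cross-validated AUC, the combination described above, using synthetic covariates in place of the dose-volume and clinical data (which cannot be reproduced here); the sample size and model settings are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for dose-volume metrics plus clinical covariates.
X, y = make_classification(n_samples=300, n_features=12, n_informative=4,
                           weights=[0.6, 0.4], random_state=0)

rfc = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(rfc, X, y, cv=10, scoring="roc_auc")
print(f"mean AUC = {auc.mean():.2f} (s.d. = {auc.std():.2f})")
```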

  2. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES Beta

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; Neary, Vincent S.

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.

  3. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control System (ATC) has been studied. Definitions, input, output and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.

  4. Sampling diffusive transition paths

    SciTech Connect

    Miller, Thomas F., III; Predescu, Cristian

    2006-10-12

    We address the problem of sampling double-ended diffusive paths. The ensemble of paths is expressed using a symmetric version of the Onsager-Machlup formula, which only requires evaluation of the force field and which, upon direct time discretization, gives rise to a symmetric integrator that is accurate to second order. Efficiently sampling this ensemble requires avoiding the well-known stiffness problem associated with sampling infinitesimal Brownian increments of the path, as well as a different type of stiffness associated with sampling the coarse features of long paths. The fine-features sampling stiffness is eliminated with the use of the fast sampling algorithm (FSA), and the coarse-feature sampling stiffness is avoided by introducing the sliding and sampling (S&S) algorithm. A key feature of the S&S algorithm is that it enables massively parallel computers to sample diffusive trajectories that are long in time. We use the algorithm to sample the transition path ensemble for the structural interconversion of the 38-atom Lennard-Jones cluster at low temperature.

  5. Sampling diffusive transition paths.

    PubMed

    Miller, Thomas F; Predescu, Cristian

    2007-04-14

    The authors address the problem of sampling double-ended diffusive paths. The ensemble of paths is expressed using a symmetric version of the Onsager-Machlup formula, which only requires evaluation of the force field and which, upon direct time discretization, gives rise to a symmetric integrator that is accurate to second order. Efficiently sampling this ensemble requires avoiding the well-known stiffness problem associated with the sampling of infinitesimal Brownian increments of the path, as well as a different type of stiffness associated with the sampling of the coarse features of long paths. The fine-feature sampling stiffness is eliminated with the use of the fast sampling algorithm, and the coarse-feature sampling stiffness is avoided by introducing the sliding and sampling (S&S) algorithm. A key feature of the S&S algorithm is that it enables massively parallel computers to sample diffusive trajectories that are long in time. The authors use the algorithm to sample the transition path ensemble for the structural interconversion of the 38-atom Lennard-Jones cluster at low temperature. PMID:17444696

  6. Assessment of physical risk factors for the shoulder using the Posture, Activity, Tools, and Handling (PATH) method in small-scale commercial crab pot fishing.

    PubMed

    Kucera, Kristen L; Lipscomb, Hester J

    2010-10-01

    An observational work-sampling technique--Posture, Activity, Tools, and Handling (PATH)--was used to describe the prevalence of awkward postures and other physical risk factors for shoulder symptoms among a purposive sample of 11 small-scale commercial crab pot fishing crews. Fishing activities with awkward shoulder postures included hooking the buoy, feeding the rope into the hydraulic puller, and handling the crab pots. Increasing the size of the crew decreased the frequency of awkward shoulder postures for the captain but not for the mate. Awkward shoulder postures varied by technique, task distribution, equipment, and boat characteristics and setup, indicating these factors may be important determinants of exposure. Care should be taken in assuming that personal technique drives ergonomic exposure variability among these small-scale commercial fishermen. PMID:20954035

  7. Application of Borehole Geophysical Methods for Assessing Agro-Chemical Flow Paths in Fractured Bedrock Underlying the Black Brook Watershed, Northwestern New Brunswick

    NASA Astrophysics Data System (ADS)

    Desroches, A.; Butler, K.

    2009-05-01

    The upper Saint John River valley represents an economically important agricultural region that suffers from high nitrate levels in the groundwater as a result of fertilizer use. This study focuses on the fractured bedrock aquifer beneath the Black Brook Watershed, near Saint-Andre (Grand Falls), New Brunswick, where prediction of nitrate migration is limited by a lack of knowledge of the bedrock fracture characteristics. Bedrock consists of a fine-grained, siliciclastic unit of the Grog Brook Group gradationally overlain by a carbonate unit assigned to the Matapédia Group. Groundwater flow through the fractured bedrock is expected to be primarily influenced by the distribution and orientation of fractures in these rock units. This study demonstrates the effectiveness of the selected suite of borehole-geophysical tools used to identify and describe the fractured bedrock characteristics, and assists in understanding the migration pathways of agrochemical leachate from farm fields. Fracture datasets were acquired from five new vertical boreholes that ranged from 50 to 140 metres in depth, and from three outcrop locations along the new Trans-Canada Highway, approximately two kilometres away. The borehole-geophysical methods used included natural gamma ray (GR), single point resistance (SPR), spontaneous potential (SP), slim-hole optical borehole televiewer (OBI) and acoustic borehole televiewer (ABI). The ABI and OBI tools delivered high-resolution oriented images of the borehole walls, enabled visualization of fractures in situ, and provided accurate information on their location, orientation, and aperture. The GR, SPR and SP logs identified changes in lithology, bed thickness and conductive fracture zones. Detailed inspection of the borehole televiewer images identified 390 fractures. Equal-area stereographic and rose diagrams of fracture planes have been used to identify three discrete fracture sets: 1) steeply dipping fractures that strike 068°/248°, with

  8. Numerical evaluation of Feynman path integrals

    NASA Astrophysics Data System (ADS)

    Baird, William Hugh

    1999-11-01

    The notion of path integration developed by Feynman, while an incredibly successful method of solving quantum mechanical problems, leads to frequently intractable integrations over an infinite number of paths. Two methods now exist which sidestep this difficulty by defining "densities" of actions which give the relative number of paths found at different values of the action. These densities are sampled by computer generation of paths and the propagators are found to a high degree of accuracy for the case of a particle on the infinite half line and in a finite square well in one dimension. The problem of propagation within a two dimensional radial well is also addressed as the precursor to the problem of a particle in a stadium (quantum billiard).

  9. Paths correlation matrix.

    PubMed

    Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang

    2015-09-15

    Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of Jones matrices of every light propagation path. Because PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of a rough surface can be calculated from the PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that PCM is an efficient way to physically model light-matter interactions. PMID:26371930

  10. A new, reliable, and simple-to-use method for the analysis of a population of values of a random variable using the Weibull probability distribution: application to acrylic bone cement fatigue results.

    PubMed

    Janna, Sied; Dwiggins, David P; Lewis, Gladius

    2005-01-01

    In cases where the Weibull probability distribution is being investigated as a possible fit to experimentally obtained results of a random variable (V), there is, currently, no accurate and reliable but simple-to-use method available for simultaneously (a) establishing if the fit is of the two- or three-parameter variant of the distribution, and/or (b) estimating the minimum value of the variable (V(0)), in cases where the three-parameter variant is shown to be applicable. In the present work, the details of such a method -- which uses a simple nonlinear regression analysis -- are presented, together with results of its use when applied to 4 sets of number-of-cycles-to-fracture results from fatigue tests, performed in our laboratory, using specimens fabricated from 3 different acrylic bone cement formulations. The key result of the method is that the two- or three-parameter variant of the probability distribution is applicable if the estimate of V(0) obtained is less than or greater than zero, respectively. PMID:16179755
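
    A sketch of the kind of simple nonlinear regression fit described above: a three-parameter Weibull CDF is fitted to median-rank plotting positions, and the sign of the estimated V(0) suggests whether the two- or three-parameter variant applies. The synthetic data, plotting-position formula and starting values are assumptions, not the paper's procedure or results.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(v, v0, eta, beta):
    """Three-parameter Weibull CDF; reduces to the two-parameter form when v0 = 0."""
    return 1.0 - np.exp(-np.clip((v - v0) / eta, 0.0, None) ** beta)

# Hypothetical number-of-cycles-to-fracture data, not the paper's results.
rng = np.random.default_rng(4)
v = np.sort(20000.0 + 80000.0 * rng.weibull(1.8, size=30))
n = len(v)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank plotting positions

p0 = [0.5 * v.min(), v.std(), 1.5]               # starting guess
(v0, eta, beta), _ = curve_fit(weibull_cdf, v, F, p0=p0, maxfev=20000)
variant = "three-parameter" if v0 > 0 else "two-parameter"
print(f"V0 = {v0:.0f}, eta = {eta:.0f}, beta = {beta:.2f} -> {variant} variant suggested")
```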

  11. Mobile transporter path planning

    NASA Technical Reports Server (NTRS)

    Baffes, Paul; Wang, Lui

    1990-01-01

    The use of a genetic algorithm (GA) for solving the mobile transporter path planning problem is investigated. The mobile transporter is a traveling robotic vehicle proposed for the space station which must be able to reach any point of the structure autonomously. Elements of the genetic algorithm are explored in both a theoretical and experimental sense. Specifically, double crossover, greedy crossover, and tournament selection techniques are examined. Additionally, the use of local optimization techniques working in concert with the GA is also explored. Recent developments in genetic algorithm theory are shown to be particularly effective in a path planning problem domain, though problem areas can be cited which require more research.
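
    A small genetic-algorithm sketch for waypoint-based path planning, illustrating the tournament selection and crossover operators mentioned above (a simple single-point crossover and Gaussian mutation are used here); the encoding, obstacle, cost function and GA settings are assumptions rather than the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacle_c, obstacle_r = np.array([5.0, 5.0]), 2.0       # hypothetical obstacle
n_way, pop_size, n_gen = 4, 60, 200

def decode(chrom):
    """Chromosome = flattened list of intermediate waypoints."""
    return np.vstack([start, chrom.reshape(n_way, 2), goal])

def cost(chrom):
    pts = decode(chrom)
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    # Penalise waypoints falling inside the obstacle.
    penalty = np.sum(np.linalg.norm(pts - obstacle_c, axis=1) < obstacle_r) * 50.0
    return length + penalty

pop = rng.uniform(0, 10, size=(pop_size, n_way * 2))
for _ in range(n_gen):
    new_pop = []
    for _ in range(pop_size):
        # Tournament selection of two parents.
        i, j = rng.integers(pop_size, size=2)
        p1 = pop[i] if cost(pop[i]) < cost(pop[j]) else pop[j]
        i, j = rng.integers(pop_size, size=2)
        p2 = pop[i] if cost(pop[i]) < cost(pop[j]) else pop[j]
        cut = rng.integers(1, n_way * 2)                  # single-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child += rng.normal(0.0, 0.3, size=child.shape)   # Gaussian mutation
        new_pop.append(child)
    pop = np.array(new_pop)

best = min(pop, key=cost)
print("best cost:", round(cost(best), 2))
```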

  12. Assessment of the probability of contaminating Mars

    NASA Technical Reports Server (NTRS)

    Judd, B. R.; North, D. W.; Pezier, J. P.

    1974-01-01

    New methodology is proposed to assess the probability that the planet Mars will be biologically contaminated by terrestrial microorganisms aboard a spacecraft. Present NASA methods are based on the Sagan-Coleman formula, which states that the probability of contamination is the product of the expected microbial release and a probability of growth. The proposed new methodology extends the Sagan-Coleman approach to permit utilization of detailed information on microbial characteristics, the lethality of release and transport mechanisms, and of other information about the Martian environment. Three different types of microbial release are distinguished in the model for assessing the probability of contamination. The number of viable microbes released by each mechanism depends on the bio-burden in various locations on the spacecraft and on whether the spacecraft landing is accomplished according to plan. For each of the three release mechanisms a probability of growth is computed, using a model for transport into an environment suited to microbial growth.
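
    A minimal numerical sketch in the spirit of the extension described above: the contamination probability is approximated by summing expected microbial release times probability of growth over several release mechanisms, a first-order approximation valid when the individual products are small. All mechanism names and numbers are invented for illustration.

```python
# Sagan-Coleman style estimate extended to several release mechanisms.
mechanisms = {
    # mechanism: (expected viable microbes released, probability of growth)
    "surface erosion": (1e3, 1e-7),
    "impact breakup":  (1e5, 1e-8),
    "nominal venting": (1e2, 1e-6),
}

p_contamination = sum(n_release * p_growth
                      for n_release, p_growth in mechanisms.values())
print(f"estimated probability of contamination ~ {p_contamination:.2e}")
```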

  13. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images. PMID:9608471

  14. Coherence-path duality relations for N paths

    NASA Astrophysics Data System (ADS)

    Hillery, Mark; Bagan, Emilio; Bergou, Janos; Cottrell, Seth

    2016-05-01

    For an interferometer with two paths, there is a relation between the information about which path the particle took and the visibility of the interference pattern at the output. The more path information we have, the smaller the visibility, and vice versa. We generalize this relation to a multi-path interferometer, and we substitute two recently defined measures of quantum coherence for the visibility, which results in two duality relations. The path information is provided by attaching a detector to each path. In the first relation, which uses an l1 measure of coherence, the path information is obtained by applying the minimum-error state discrimination procedure to the detector states. In the second, which employs an entropic measure of coherence, the path information is the mutual information between the detector states and the result of measuring them. Both approaches are quantitative versions of complementarity for N-path interferometers. Support provided by the John Templeton Foundation.

  15. Following the Path

    ERIC Educational Resources Information Center

    Rodia, Becky

    2004-01-01

    This article profiles Diane Stanley, an author and illustrator of children's books. Although she was studying to be a medical illustrator in graduate school, Stanley's path changed when she got married and had children. As she was raising her children, she became increasingly enamored of the colorful children's books she would check out of the…

  16. An Unplanned Path

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.; Sterenberg, Gladys Y.; Long, Julie S.

    2013-01-01

    The authors elucidate what they saw as three important challenges to overcome along the path to becoming elementary school mathematics teacher leaders: marginal interest in math, low self-confidence, and teaching in isolation. To illustrate how these challenges were mitigated, they focus on the stories of two elementary school teachers--Laura and…

  17. A Tale of Two Probabilities

    ERIC Educational Resources Information Center

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  18. Improved initial guess for minimum energy path calculations.

    PubMed

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt; Jónsson, Hannes

    2014-06-01

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used. PMID:24907989
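
    For contrast with the IDPP idea described above, the sketch below builds the baseline initial guess that the method improves on: a linear interpolation of Cartesian coordinates between the two endpoint geometries. The toy three-atom geometries are assumptions.

```python
import numpy as np

def linear_initial_path(r_initial, r_final, n_images):
    """Baseline initial guess: linear interpolation of Cartesian coordinates
    between the endpoint geometries (the IDPP path replaces this guess)."""
    alphas = np.linspace(0.0, 1.0, n_images)[:, None, None]
    return (1 - alphas) * r_initial[None] + alphas * r_final[None]

# Hypothetical 3-atom endpoint geometries (rows: atoms, columns: x, y, z).
r0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5,  0.9, 0.0]])
r1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, -0.9, 0.0]])
images = linear_initial_path(r0, r1, n_images=7)
print(images.shape)   # (7, 3, 3): 7 images along the band
```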

  19. Improved initial guess for minimum energy path calculations

    SciTech Connect

    Smidstrup, Søren; Pedersen, Andreas; Stokbro, Kurt

    2014-06-07

    A method is presented for generating a good initial guess of a transition path between given initial and final states of a system without evaluation of the energy. An objective function surface is constructed using an interpolation of pairwise distances at each discretization point along the path and the nudged elastic band method then used to find an optimal path on this image dependent pair potential (IDPP) surface. This provides an initial path for the more computationally intensive calculations of a minimum energy path on an energy surface obtained, for example, by ab initio or density functional theory. The optimal path on the IDPP surface is significantly closer to a minimum energy path than a linear interpolation of the Cartesian coordinates and, therefore, reduces the number of iterations needed to reach convergence and averts divergence in the electronic structure calculations when atoms are brought too close to each other in the initial path. The method is illustrated with three examples: (1) rotation of a methyl group in an ethane molecule, (2) an exchange of atoms in an island on a crystal surface, and (3) an exchange of two Si-atoms in amorphous silicon. In all three cases, the computational effort in finding the minimum energy path with DFT was reduced by a factor ranging from 50% to an order of magnitude by using an IDPP path as the initial path. The time required for parallel computations was reduced even more because of load imbalance when linear interpolation of Cartesian coordinates was used.

  20. The Probability of Causal Conditionals

    ERIC Educational Resources Information Center

    Over, David E.; Hadjichristidis, Constantinos; Evans, Jonathan St. B. T.; Handley, Simon J.; Sloman, Steven A.

    2007-01-01

    Conditionals in natural language are central to reasoning and decision making. A theoretical proposal called the Ramsey test implies the conditional probability hypothesis: that the subjective probability of a natural language conditional, P(if p then q), is the conditional subjective probability, P(q | p). We report three experiments on…

  1. Communication path for extreme environments

    NASA Technical Reports Server (NTRS)

    Jorgensen, Charles C. (Inventor); Betts, Bradley J. (Inventor)

    2010-01-01

    Methods and systems for using one or more radio frequency identification devices (RFIDs), or other suitable signal transmitters and/or receivers, to provide a sensor information communication path, to provide location and/or spatial orientation information for an emergency service worker (ESW), to provide an ESW escape route, to indicate a direction from an ESW to an ES appliance, to provide updated information on a region or structure that presents an extreme environment (fire, hazardous fluid leak, underwater, nuclear, etc.) in which an ESW works, and to provide accumulated thermal load or thermal breakdown information on one or more locations in the region.

  2. Staff detection with stable paths.

    PubMed

    Dos Santos Cardoso, Jaime; Capela, Artur; Rebelo, Ana; Guedes, Carlos; Pinto da Costa, Joaquim

    2009-06-01

    The preservation of musical works produced in the past requires their digitalization and transformation into a machine-readable format. The processing of handwritten musical scores by computers remains far from ideal. One of the fundamental stages to carry out this task is the staff line detection. We investigate a general-purpose, knowledge-free method for the automatic detection of music staff lines based on a stable path approach. Lines affected by curvature, discontinuities, and inclination are robustly detected. Experimental results show that the proposed technique consistently outperforms well-established algorithms. PMID:19372615

  3. In search of a statistical probability model for petroleum-resource assessment : a critique of the probabilistic significance of certain concepts and methods used in petroleum-resource assessment : to that end, a probabilistic model is sketched

    USGS Publications Warehouse

    Grossling, Bernardo F.

    1975-01-01

    Exploratory drilling is still in incipient or youthful stages in those areas of the world where the bulk of the potential petroleum resources is yet to be discovered. Methods of assessing resources from projections based on historical production and reserve data are limited to mature areas. For most of the world's petroleum-prospective areas, a more speculative situation calls for a critical review of resource-assessment methodology. The language of mathematical statistics is required to define more rigorously the appraisal of petroleum resources. Basically, two approaches have been used to appraise the amounts of undiscovered mineral resources in a geologic province: (1) projection models, which use statistical data on the past outcome of exploration and development in the province; and (2) estimation models of the overall resources of the province, which use certain known parameters of the province together with the outcome of exploration and development in analogous provinces. These two approaches often lead to widely different estimates. Some of the controversy that arises results from a confusion of the probabilistic significance of the quantities yielded by each of the two approaches. Also, inherent limitations of analytic projection models - such as those using the logistic and Gompertz functions - have often been ignored. The resource-assessment problem should be recast in terms that provide for consideration of the probability of existence of the resource and of the probability of discovery of a deposit. Then the two above-mentioned models occupy the two ends of the probability range. The new approach accounts for (1) what can be expected with reasonably high certainty by mere projections of what has been accomplished in the past; (2) the inherent biases of decision-makers and resource estimators; (3) upper bounds that can be set up as goals for exploration; and (4) the uncertainties in geologic conditions in a search for minerals. Actual outcomes can then

  4. Quantum probability and many worlds

    NASA Astrophysics Data System (ADS)

    Hemmo, Meir; Pitowsky, Itamar

    We discuss the meaning of probabilities in the many worlds interpretation of quantum mechanics. We start by presenting very briefly the many worlds theory, how the problem of probability arises, and some unsuccessful attempts to solve it in the past. Then we criticize a recent attempt by Deutsch to derive the quantum mechanical probabilities from the non-probabilistic parts of quantum mechanics and classical decision theory. We further argue that the Born probability does not make sense even as an additional probability rule in the many worlds theory. Our conclusion is that the many worlds theory fails to account for the probabilistic statements of standard (collapse) quantum mechanics.

  5. Paths of Target Seeking Missiles in Two Dimensions

    NASA Technical Reports Server (NTRS)

    Watkins, Charles E.

    1946-01-01

    Parameters that enter into equation of trajectory of a missile are discussed. Investigation is made of normal pursuit, of constant, proportional, and line-of-sight methods of navigation employing target seeker, and of deriving corresponding pursuit paths. Pursuit paths obtained under similar conditions for different methods are compared. Proportional navigation is concluded to be best method for using target seeker installed in missile.
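
    A rough two-dimensional sketch of proportional navigation, the method the report concludes is best: the pursuer's turn rate is set proportional to the line-of-sight rate. Speeds, the navigation constant and the intercept radius are illustrative assumptions.

```python
import numpy as np

dt, N_gain = 0.01, 3.0                        # navigation constant (assumed)
missile_p, missile_v = np.array([0.0, 0.0]), np.array([300.0, 0.0])
target_p, target_v = np.array([5000.0, 2000.0]), np.array([-150.0, 0.0])

r0 = target_p - missile_p
prev_lam = np.arctan2(r0[1], r0[0])
min_range = np.inf
for step in range(20000):
    r = target_p - missile_p
    min_range = min(min_range, np.linalg.norm(r))
    if min_range < 10.0:                      # treat 10 m as an intercept
        print(f"intercept after ~{step * dt:.1f} s")
        break
    lam = np.arctan2(r[1], r[0])              # line-of-sight angle
    lam_dot = (lam - prev_lam) / dt           # line-of-sight rate
    prev_lam = lam
    speed = np.linalg.norm(missile_v)
    # Proportional navigation: commanded turn rate = N_gain * lam_dot.
    heading = np.arctan2(missile_v[1], missile_v[0]) + N_gain * lam_dot * dt
    missile_v = speed * np.array([np.cos(heading), np.sin(heading)])
    missile_p = missile_p + missile_v * dt
    target_p = target_p + target_v * dt
print(f"closest approach: {min_range:.1f} m")
```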

  6. Probability Density Function for Waves Propagating in a Straight PEC Rough Wall Tunnel

    SciTech Connect

    Pao, H

    2004-11-08

    The probability density function for waves propagating in a straight perfect electrical conductor (PEC) rough-wall tunnel is deduced from mathematical models of the random electromagnetic fields. By the Central Limit Theorem, the field propagating in caves or tunnels is a complex-valued Gaussian random process. The probability density function for a single-mode field amplitude in such a structure is Ricean. Since both the expected value and the standard deviation of this field depend only on radial position, the probability density function, which describes the power distribution, is a radially dependent function. The radio channel places fundamental limitations on the performance of wireless communication systems in tunnels and caves. The transmission path between the transmitter and receiver can vary from a simple direct line of sight to one that is severely obstructed by rough walls and corners. Unlike wired channels, which are stationary and predictable, radio channels can be extremely random and difficult to analyze. In fact, modeling the radio channel has historically been one of the more challenging parts of any radio system design; this is often done using statistical methods. In this contribution, we present the most important statistical property, the field probability density function, of waves propagating in a straight PEC rough-wall tunnel. This work studies only the simplest case, a PEC boundary, which is an idealization; however, the methods and conclusions developed herein are applicable to real-world problems in which the boundary is dielectric. The mechanisms behind electromagnetic wave propagation in caves or tunnels are diverse, but can generally be attributed to reflection, diffraction, and scattering. Because of the multiple reflections from rough walls, the electromagnetic waves travel along different paths of varying lengths. The interactions between these waves cause multipath fading at any location, and the strengths of the waves decrease as the distance
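
    A short sketch of the Ricean probability density function of the single-mode field amplitude discussed above; the deterministic component nu and diffuse component sigma are illustrative constants rather than the radially dependent parameters of the tunnel model.

```python
import numpy as np
from scipy.special import i0

def rician_pdf(a, nu, sigma):
    """Ricean PDF of the field amplitude a: deterministic component nu,
    diffuse (Gaussian) component with standard deviation sigma."""
    return (a / sigma**2) * np.exp(-(a**2 + nu**2) / (2 * sigma**2)) * i0(a * nu / sigma**2)

a = np.linspace(0.0, 4.0, 9)
print(np.round(rician_pdf(a, nu=1.2, sigma=0.4), 4))
```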

  7. Probability workshop to be better in probability topic

    NASA Astrophysics Data System (ADS)

    Asmat, Aszila; Ujang, Suriyati; Wahid, Sharifah Norhuda Syed

    2015-02-01

    The purpose of the present study was to examine whether statistics anxiety and attitudes towards the probability topic among students at the higher education level have an effect on their performance. Sixty-two fourth-semester science students were given statistics anxiety questionnaires about their perception of the probability topic. Results indicated that students' performance in the probability topic is not related to anxiety level; that is, a higher level of statistics anxiety does not lead to a lower score in probability performance. The study also revealed that motivated students benefited from the probability workshop, with their performance in the probability topic showing a positive improvement compared with before the workshop. In addition, there is a significant difference in students' performance between genders, with better achievement among female students than male students. Thus, more initiatives in learning programs with different teaching approaches are needed to provide useful information for improving student learning outcomes in higher learning institutions.

  8. Survival probability in patients with liver trauma.

    PubMed

    Buci, Skender; Kukeli, Agim

    2016-08-01

    Purpose - The purpose of this paper is to assess the survival probability among patients with liver trauma injury using the anatomical and psychological scores of conditions, characteristics and treatment modes. Design/methodology/approach - A logistic model is used to estimate 173 patients' survival probability. Data are taken from patient records. Only emergency room patients admitted to University Hospital of Trauma (former Military Hospital) in Tirana are included. Data are recorded anonymously, preserving the patients' privacy. Findings - When correctly predicted, the logistic models show that survival probability varies from 70.5 percent up to 95.4 percent. The degree of trauma injury, trauma with liver and other organs, total days the patient was hospitalized, and treatment method (conservative vs intervention) are statistically important in explaining survival probability. Practical implications - The study gives patients, their relatives and physicians ample and sound information they can use to predict survival chances, the best treatment and resource management. Originality/value - This study, which has not been done previously, explores survival probability, success probability for conservative and non-conservative treatment, and success probability for single vs multiple injuries from liver trauma. PMID:27477933
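
    A minimal logistic-regression sketch of the kind of survival-probability model described above, fitted to synthetic covariates standing in for injury grade, multi-organ involvement, days hospitalized and treatment mode; none of the numbers are the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 173
# Synthetic covariates: injury grade, other organs injured, days hospitalized,
# surgical intervention (1) vs conservative treatment (0).
X = np.column_stack([rng.integers(1, 6, n),
                     rng.integers(0, 2, n),
                     rng.integers(1, 30, n),
                     rng.integers(0, 2, n)])
logits = 4.0 - 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.02 * X[:, 2] - 0.3 * X[:, 3]
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logits))   # 1 = survived (simulated)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted survival probabilities:", model.predict_proba(X[:3])[:, 1].round(3))
```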

  9. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Skøien, Jon Olav; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2016-04-01

    Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters which are different in space and time, but still can give a spatially and temporally consistent output. However, their method is computationally complex for our larger number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below. The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high-flows rather than the forecast skill of lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but assuring that they have some spatial correlation, by adding a spatial penalty in the calibration process. This can in some cases have a slight negative

  10. Four paths of competition

    SciTech Connect

    Studness, C.M.

    1995-05-01

    The financial community's focus on utility competition has been riveted on the proceedings now in progress at state regulatory commissions. The fear that something immediately damaging will come out of these proceedings seems to have diminished in recent months, and the stock market has reacted favorably. However, regulatory developments are only one of four paths leading to competition; the others are the marketplace, the legislatures, and the courts. Each could play a critical role in the emergence of competition.

  11. PATHS groundwater hydrologic model

    SciTech Connect

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model, covering the preliminary evaluation capability prepared for WISAP, including the enhancements that were made because of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS; it is written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, together with program listings and a test case listing. Appendix D is a definition of terms.

  12. Spirit's Path to Bonneville

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Scientists created this overlay map by laying navigation and panoramic camera images taken from the surface of Mars on top of one of Spirit's descent images taken as the spacecraft descended to the martian surface. The map was created to help track the path that Spirit has traveled through sol 44 and to put into perspective the distance left to travel before reaching the edge of the large crater nicknamed 'Bonneville.'

    The area boxed in yellow contains the ground images that have been matched to and layered on top of the descent image. The yellow line shows the path that Spirit has traveled and the red dashed line shows the intended path for future sols. The blue circles highlight hollowed areas on the surface, such as Sleepy Hollow, near the lander, and Laguna Hollow, the sol 45 drive destination. Scientists use these hollowed areas - which can be seen in both the ground images and the descent image - to correctly match up the overlay.

    Field geologists on Earth create maps like this to assist them in tracking their observations.

  13. On the bound of first excursion probability

    NASA Technical Reports Server (NTRS)

    Yang, J. N.

    1969-01-01

    A method has been developed to improve the lower bound of the first excursion probability; it can be applied to problems with either constant or time-dependent barriers. The method requires knowledge of the joint density function of the random process at two arbitrary instants.

  14. Propensity, Probability, and Quantum Theory

    NASA Astrophysics Data System (ADS)

    Ballentine, Leslie E.

    2016-08-01

    Quantum mechanics and probability theory share one peculiarity. Both have well established mathematical formalisms, yet both are subject to controversy about the meaning and interpretation of their basic concepts. Since probability plays a fundamental role in QM, the conceptual problems of one theory can affect the other. We first classify the interpretations of probability into three major classes: (a) inferential probability, (b) ensemble probability, and (c) propensity. Class (a) is the basis of inductive logic; (b) deals with the frequencies of events in repeatable experiments; (c) describes a form of causality that is weaker than determinism. An important, but neglected, paper by P. Humphreys demonstrated that propensity must differ mathematically, as well as conceptually, from probability, but he did not develop a theory of propensity. Such a theory is developed in this paper. Propensity theory shares many, but not all, of the axioms of probability theory. As a consequence, propensity supports the Law of Large Numbers from probability theory, but does not support Bayes theorem. Although there are particular problems within QM to which any of the classes of probability may be applied, it is argued that the intrinsic quantum probabilities (calculated from a state vector or density matrix) are most naturally interpreted as quantum propensities. This does not alter the familiar statistical interpretation of QM. But the interpretation of quantum states as representing knowledge is untenable. Examples show that a density matrix fails to represent knowledge.

  15. Path optimization with limited sensing ability

    NASA Astrophysics Data System (ADS)

    Kang, Sung Ha; Kim, Seong Jun; Zhou, Haomin

    2015-10-01

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.

  16. Path optimization with limited sensing ability

    SciTech Connect

    Kang, Sung Ha; Kim, Seong Jun; Zhou, Haomin

    2015-10-15

    We propose a computational strategy to find the optimal path for a mobile sensor with limited coverage to traverse a cluttered region. The goal is to find one of the shortest feasible paths to achieve the complete scan of the environment. We pose the problem in the level set framework, and first consider a related question of placing multiple stationary sensors to obtain the full surveillance of the environment. By connecting the stationary locations using the nearest neighbor strategy, we form the initial guess for the path planning problem of the mobile sensor. Then the path is optimized by reducing its length, via solving a system of ordinary differential equations (ODEs), while maintaining the complete scan of the environment. Furthermore, we use intermittent diffusion, which converts the ODEs into stochastic differential equations (SDEs), to find an optimal path whose length is globally minimal. To improve the computation efficiency, we introduce two techniques, one to remove redundant connecting points to reduce the dimension of the system, and the other to deal with the entangled path so the solution can escape the local traps. Numerical examples are shown to illustrate the effectiveness of the proposed method.

  17. Investigation of Flood Inundation Probability in Taiwan

    NASA Astrophysics Data System (ADS)

    Wang, Chia-Ho; Lai, Yen-Wei; Chang, Tsang-Jung

    2010-05-01

    Taiwan lies in the path of typhoons from the northeast Pacific Ocean and is situated in a tropical-subtropical transition zone. As a result, rainfall is abundant all year round, especially in summer and autumn. For flood inundation analysis in Taiwan, there are many uncertainties in hydrological, hydraulic and land-surface topography characteristics, all of which can change the flood inundation behaviour. According to the 7th work item of Article 22 of the Disaster Prevention and Protection Act in Taiwan, to keep flood disasters from worsening, the investigation and analysis of disaster potentials, hazard degree and situation simulation must proceed with scientific approaches. However, flood potential analysis currently uses a deterministic approach to define flood inundation without considering data uncertainties. This research incorporates the data uncertainty concept into flood inundation maps to show the flood probability in each grid cell, which can serve as a basis for emergency evacuation when typhoons approach and extremely torrential rain begins. The research selects the Hebauyu watershed of Chiayi County as the demonstration area. Owing to the uncertainties of the data used, a sensitivity analysis is first conducted using Latin Hypercube sampling (LHS). The LHS data sets are then input into an integrated numerical model, developed herein to assess flood inundation hazards in coastal lowlands, based on the extension of a 1-D river routing model and a 2-D inundation routing model. Finally, the probability of flood inundation is calculated, and flood inundation probability maps are obtained. These probability maps can serve as an alternative to the old flood potential maps when planning new hydraulic infrastructure in the future.
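
    A short sketch of the Latin Hypercube sampling step mentioned above, using SciPy's quasi-Monte Carlo module; the three uncertain inputs and their ranges are hypothetical placeholders for the hydrological, hydraulic and topographic uncertainties.

```python
from scipy.stats import qmc

# Hypothetical uncertain inputs: Manning roughness, a rainfall depth multiplier
# and a ground elevation error (m); the ranges below are placeholders.
l_bounds = [0.02, 0.8, -0.5]
u_bounds = [0.08, 1.4, 0.5]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=200)                  # stratified in [0, 1)^3
samples = qmc.scale(unit_samples, l_bounds, u_bounds)
print(samples[:3].round(3))

# Each row would drive one run of the coupled 1-D river / 2-D inundation model;
# the per-grid fraction of runs that flood then gives the inundation probability map.
```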

  18. The cumulative reaction probability as eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Manthe, Uwe; Miller, William H.

    1993-09-01

    It is shown that the cumulative reaction probability for a chemical reaction can be expressed (absolutely rigorously) as N(E) = ∑_k p_k(E), where {p_k} are the eigenvalues of a certain Hermitian matrix (or operator). The eigenvalues {p_k} all lie between 0 and 1 and thus can be interpreted as probabilities: eigenreaction probabilities, which may be thought of as the rigorous generalization of the transmission coefficients for the various states of the activated complex in transition state theory. The eigenreaction probabilities {p_k} can be determined by diagonalizing a matrix that is directly available from the Hamiltonian matrix itself. It is also shown how a very efficient iterative method can be used to determine the eigenreaction probabilities for problems that are too large for a direct diagonalization to be possible. The number of iterations required is much smaller than that of previous methods, approximately the number of eigenreaction probabilities that are significantly different from zero. All of these new ideas are illustrated by application to three model problems: transmission through a one-dimensional (Eckart potential) barrier, the collinear H+H2→H2+H reaction, and the three-dimensional version of this reaction for total angular momentum J=0.
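    In symbols, the eigenvalue formulation summarized above can be written as follows, where \hat{P}(E) denotes the Hermitian reaction probability operator (the operator symbol is notation introduced here for readability, not the authors'):

        N(E) \;=\; \sum_k p_k(E), \qquad
        \hat{P}(E)\,\lvert k \rangle \;=\; p_k(E)\,\lvert k \rangle, \qquad
        0 \le p_k(E) \le 1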

  19. Path-breaking schemes for nonequilibrium free energy calculations

    NASA Astrophysics Data System (ADS)

    Chelli, Riccardo; Gellini, Cristina; Pietraperzia, Giangaetano; Giovannelli, Edoardo; Cardini, Gianni

    2013-06-01

    We propose a path-breaking route to the enhancement of unidirectional nonequilibrium simulations for the calculation of free energy differences via Jarzynski's equality [C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997)], 10.1103/PhysRevLett.78.2690. One of the most important limitations of unidirectional nonequilibrium simulations is the number of realizations necessary to reach suitable convergence of the work exponential average featuring in Jarzynski's relationship. In this respect, a significant improvement in performance can be obtained by finding a way of stopping trajectories with negligible contribution to the work exponential average before their normal end. This is achieved using path-breaking schemes which are essentially based on periodic checks of the work dissipated during the pulling trajectories. Such schemes can be based either on breaking trajectories whose dissipated work exceeds a given threshold or on breaking trajectories with a probability increasing with the dissipated work. In both cases, the computer time needed to carry out a series of nonequilibrium trajectories is reduced by a factor ranging from 2 to more than 10, at least for the processes under consideration in the present study. The efficiency depends on several aspects, such as the type of process, the number of check-points along the pathway and the pulling rate. The method is illustrated through radically different processes, i.e., the helix-coil transition of deca-alanine and the pulling of the distance between two methane molecules in water solution.
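    A minimal sketch of the threshold variant described above: accumulate work along each pulling trajectory, stop a trajectory once its work exceeds a cutoff (its Boltzmann weight is then negligible), and form the Jarzynski exponential average from what was collected. The per-step work increments, the cutoff value and the use of total rather than dissipated work are simplifying assumptions; the published schemes treat broken trajectories more carefully so that the estimator stays consistent.

        import numpy as np

        def jarzynski_free_energy(work_increments, kT=1.0, w_threshold=20.0):
            """Estimate dF = -kT ln<exp(-W/kT)> from nonequilibrium trajectories,
            breaking any trajectory whose accumulated work exceeds w_threshold
            (its weight exp(-W/kT) is then negligible)."""
            weights = []
            for increments in work_increments:       # one list of dW per trajectory
                W = 0.0
                for dW in increments:
                    W += dW
                    if W > w_threshold:              # path-breaking check-point
                        break
                weights.append(np.exp(-W / kT))
            return -kT * np.log(np.mean(weights))

        # Toy trajectories: per-step work increments (illustrative numbers only).
        rng = np.random.default_rng(0)
        trajs = [rng.normal(0.05, 0.2, size=100) for _ in range(500)]
        print(jarzynski_free_energy(trajs))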

  20. Considering New Paths for Success: An Examination of the Research and Methods on Urban School-University Partnerships Post-No Child Left Behind

    ERIC Educational Resources Information Center

    Flynn, Joseph E.; Hunt, Rebecca D.; Johnson, Laura Ruth; Wickman, Scott A.

    2014-01-01

    This article examines urban school-university partnership research after No Child Left Behind. Central to the review is an analysis of trends in the research methods utilized across studies. It was found that many studies are single-case studies or anecdotal. There are few quantitative, sustained qualitative, or mixed-methods studies represented in…

  1. Establishing a Scientific Basis for Optimizing Compositions, Process Paths and Fabrication Methods for Nanostructured Ferritic Alloys for Use in Advanced Fission Energy Systems

    SciTech Connect

    Odette, G Robert; Cunningham, Nicholas J.; Wu, Yuan; Etienne, Auriane; Stergar, Erich; Yamamoto, Takuya

    2012-02-21

    The broad objective of this NEUP was to further develop a class of 12-15Cr ferritic alloys that are dispersion strengthened and made radiation tolerant by an ultrahigh density of Y-Ti-O nanofeatures (NFs) in the size range of less than 5 nm. We call these potentially transformable materials nanostructured ferritic alloys (NFAs). NFAs are typically processed by ball milling pre-alloyed rapidly solidified powders and yttria (Y2O3) powders. Proper milling effectively dissolves the Ti, Y and O solutes that precipitate as NFs during hot consolidation. The tasks in the present study included examining alternative processing paths, characterizing and optimizing the NFs and investigating solid state joining. Alternative processing paths involved rapid solidification by gas atomization of Fe, 14% Cr, 3% W, and 0.4% Ti powders that are also pre-alloyed with 0.2% Y (14YWT), where the compositions are in wt.%. The focus is on exploring the possibility of minimizing, or even eliminating, the milling time, as well as producing alloys with more homogeneous distributions of NFs and a more uniform, fine grain size. Three atomization environments were explored: Ar, Ar plus O (Ar/O) and He. The characterization of powders and alloys occurred through each processing step: powder production by gas atomization; powder milling; and powder annealing or hot consolidation by hot isostatic pressing (HIPing) or hot extrusion. The characterization studies of the materials described here include various combinations of: a) bulk chemistry; b) electron probe microanalysis (EPMA); c) atom probe tomography (APT); d) small angle neutron scattering (SANS); e) various types of scanning and transmission electron microscopy (SEM and TEM); and f) microhardness testing. The bulk chemistry measurements show that preliminary batches of gas-atomized powders could be produced within specified composition ranges. However, EPMA and TEM showed that the Y is heterogeneously distributed and phase separated, but

  2. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
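    A hedged sketch of the counting idea described above: treat the number of small events since the last large event as the "age" in a Weibull law and compute the conditional probability of a large event within the next batch of small events. The shape and scale parameters and the exact conditioning are illustrative assumptions, not the calibrated values or the finite-correlation-length correction of the paper.

        import numpy as np

        def weibull_cdf(n, beta, tau):
            """Weibull law on the small-event count n since the last large event."""
            return 1.0 - np.exp(-(n / tau) ** beta)

        def conditional_large_event_probability(n_obs, n_ahead, beta=1.5, tau=100.0):
            """P(large event within the next n_ahead small events | none in the
            first n_obs), from the Weibull law on the small-event count."""
            survive_now = 1.0 - weibull_cdf(n_obs, beta, tau)
            survive_later = 1.0 - weibull_cdf(n_obs + n_ahead, beta, tau)
            return 1.0 - survive_later / survive_now

        print(conditional_large_event_probability(n_obs=80, n_ahead=20))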

  3. Dynamic Task Assignment and Path Planning of Multi-AUV System Based on an Improved Self-Organizing Map and Velocity Synthesis Method in Three-Dimensional Underwater Workspace.

    PubMed

    Zhu, Daqi; Huang, Huan; Yang, S X

    2013-04-01

    For a 3-D underwater workspace with a variable ocean current, an integrated multiple autonomous underwater vehicle (AUV) dynamic task assignment and path planning algorithm is proposed by combining an improved self-organizing map (SOM) neural network and a novel velocity synthesis approach. The goal is to control a team of AUVs so that each appointed target location is visited exactly once, on the premise of workload balance and energy sufficiency, while guaranteeing the least total and individual consumption in the presence of the variable ocean current. First, the SOM neural network is developed to assign a team of AUVs to multiple target locations in a 3-D ocean environment. The working process involves a special definition of the initial neural weights of the SOM network, the rule to select the winner, the computation of the neighborhood function, and the method to update weights. Then, the velocity synthesis approach is applied to plan the shortest path for each AUV to visit the corresponding target in a dynamic environment in which the ocean current is variable and targets are movable. Lastly, to demonstrate the effectiveness of the proposed approach, simulation results are given in this paper. PMID:22949070
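    The sketch below shows the generic SOM competition-and-update cycle that the working process above refers to: select the winner closest to a target and pull it and its neighbors toward that target. Initializing the weights at the AUV positions, the 1-D neighborhood index and the parameter values are illustrative assumptions; the paper's special initialization, winner rule and velocity synthesis step are not reproduced here.

        import numpy as np

        def som_assign_step(weights, target, lr=0.5, sigma=1.0):
            """One SOM step: pick the winner neuron closest to the target and pull
            it (and its neighbors, via a Gaussian neighborhood) toward the target."""
            dists = np.linalg.norm(weights - target, axis=1)
            winner = int(np.argmin(dists))                       # rule to select the winner
            idx = np.arange(len(weights))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma**2))  # neighborhood function
            weights += lr * h[:, None] * (target - weights)      # weight update
            return winner, weights

        # Weights initialized at the AUVs' 3-D positions (illustrative values).
        auv_positions = np.array([[0.0, 0.0, -5.0], [10.0, 5.0, -8.0], [3.0, 9.0, -6.0]])
        winner, new_w = som_assign_step(auv_positions.copy(), np.array([6.0, 6.0, -7.0]))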

  4. Multiple Manifold Clustering Using Curvature Constrained Path

    PubMed Central

    Babaeian, Amir; Bayestehtashk, Alireza; Bandarabadi, Mojtaba

    2015-01-01

    Multiple surface clustering is a challenging task, particularly when the surfaces intersect. Available methods such as Isomap fail to capture the true shape of the surface near the intersection, resulting in incorrect clustering. The Isomap algorithm uses the shortest path between points. The main drawback of the shortest path algorithm is the lack of a curvature constraint, which allows paths to connect points on different surfaces. In this paper we address this problem by imposing a curvature constraint on the shortest path algorithm used in Isomap. The algorithm chooses several landmark nodes at random and then checks whether there is a curvature-constrained path between each landmark node and every other node in the neighborhood graph. We build a binary feature vector for each point, where each entry represents the connectivity of that point to a particular landmark. These binary feature vectors can then be used as input to a conventional clustering algorithm such as hierarchical clustering. We apply our method to simulated and some real datasets and show that it performs comparably to the best methods, such as K-manifold and spectral multi-manifold clustering. PMID:26375819
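    A small sketch of the final clustering step described above, assuming the curvature-constrained reachability between points and landmarks has already been computed elsewhere: the binary landmark-connectivity vectors are fed to ordinary hierarchical clustering. The toy connectivity matrix is illustrative.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def cluster_by_landmark_connectivity(connected, n_clusters):
            """connected[i, j] = 1 if point i is reachable from landmark j through a
            curvature-constrained path (computed elsewhere); cluster those binary
            feature vectors with standard hierarchical clustering."""
            Z = linkage(connected.astype(float), method="average", metric="hamming")
            return fcluster(Z, t=n_clusters, criterion="maxclust")

        # Toy connectivity matrix: 6 points, 2 landmarks (illustrative values).
        connected = np.array([[1, 0], [1, 0], [1, 0],
                              [0, 1], [0, 1], [0, 1]])
        labels = cluster_by_landmark_connectivity(connected, n_clusters=2)
        print(labels)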

  5. PROBABILITY SURVEYS, CONDITIONAL PROBABILITIES, AND ECOLOGICAL RISK ASSESSMENT

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as U.S. Environmental Protection Agency's (U.S. EPA) Environmental Monitoring and Assessment Program (EMAP) can be analyzed with a conditional probability analysis (CPA) to conduct quantitative probabi...

  6. Probability Surveys, Conditional Probability, and Ecological Risk Assessment

    EPA Science Inventory

    We show that probability-based environmental resource monitoring programs, such as the U.S. Environmental Protection Agency’s (U.S. EPA) Environmental Monitoring and Assessment Program, and conditional probability analysis can serve as a basis for estimating ecological risk over ...

  7. Optical tomography with discretized path integral.

    PubMed

    Yuan, Bingzhi; Tamaki, Toru; Kushida, Takahiro; Mukaigawa, Yasuhiro; Kubo, Hiroyuki; Raytchev, Bisser; Kaneda, Kazufumi

    2015-07-01

    We present a framework for optical tomography based on a path integral. Instead of directly solving the radiative transport equations, which have been widely used in optical tomography, we use a path integral that has been developed for rendering participating media based on the volume rendering equation in computer graphics. For a discretized two-dimensional layered grid, we develop an algorithm to estimate the extinction coefficients of each voxel with an interior point method. Numerical simulation results are shown to demonstrate that the proposed method works well. PMID:26839903
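    As a simplified illustration of how discretized per-voxel coefficients can be estimated from measurements along paths, the sketch below assumes pure Beer-Lambert attenuation (no scattering) and solves a bounded linear least-squares problem; the paper's path-integral formulation for participating media and its interior point solver are not reproduced here, and all names and values are illustrative.

        import numpy as np
        from scipy.optimize import lsq_linear

        def estimate_extinction(path_lengths, measured_intensity, i0=1.0):
            """Beer-Lambert along discretized paths: I = I0 * exp(-L @ sigma), so
            -log(I / I0) = L @ sigma.  Solve for per-voxel extinction sigma >= 0."""
            b = -np.log(np.asarray(measured_intensity) / i0)
            res = lsq_linear(np.asarray(path_lengths, float), b, bounds=(0.0, np.inf))
            return res.x

        # Toy 2-voxel example: rows are paths, columns are per-voxel path lengths.
        L = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
        sigma_true = np.array([0.3, 0.8])
        I = np.exp(-np.asarray(L) @ sigma_true)
        print(estimate_extinction(L, I))          # ~[0.3, 0.8]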

  8. Determination of drill paths for percutaneous cochlear access accounting for target positioning error

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael

    2007-03-01

    In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours time to avoid damage to vital structures. Recently a far less invasive approach has been proposed: percutaneous cochlear access, in which a single hole is drilled from skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root mean square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.
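    A hedged sketch of how the "high probability of avoidance" criterion above can be evaluated for a candidate straight path: sample the target position with the stated ~0.5 mm RMS error and count how often the drill segment keeps every vital-structure point outside the drill radius. The geometry, the isotropic per-axis Gaussian error model and the clearance radius are illustrative assumptions, not the paper's path-selection method.

        import numpy as np

        def point_segment_distance(p, a, b):
            """Distance from point p to the segment a-b."""
            ab, ap = b - a, p - a
            t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
            return np.linalg.norm(p - (a + t * ab))

        def clearance_probability(entry, target, structures, drill_radius=0.5,
                                  target_rms=0.5, n_samples=10_000, seed=0):
            """Fraction of sampled target positions for which the drill path keeps
            every vital-structure point outside the drill radius."""
            rng = np.random.default_rng(seed)
            safe = 0
            for _ in range(n_samples):
                t = target + rng.normal(0.0, target_rms, size=3)   # perturbed target
                if all(point_segment_distance(s, entry, t) > drill_radius
                       for s in structures):
                    safe += 1
            return safe / n_samples

        # Illustrative coordinates in millimetres.
        entry, target = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 40.0])
        structures = [np.array([1.5, 0.0, 20.0]), np.array([-2.0, 1.0, 30.0])]
        print(clearance_probability(entry, target, structures))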

  9. UCAV path planning in the presence of radar-guided surface-to-air missile threats

    NASA Astrophysics Data System (ADS)

    Zeitz, Frederick H., III

    This dissertation addresses the problem of path planning for unmanned combat aerial vehicles (UCAVs) in the presence of radar-guided surface-to-air missiles (SAMs). The radars, collocated with SAM launch sites, operate within the structure of an Integrated Air Defense System (IADS) that permits communication and cooperation between individual radars. The problem is formulated in the framework of the interaction between three sub-systems: the aircraft, the IADS, and the missile. The main features of this integrated model are: The aircraft radar cross section (RCS) depends explicitly on both the aspect and bank angles; hence, the RCS and aircraft dynamics are coupled. The probabilistic nature of IADS tracking is accounted for; namely, the probability that the aircraft has been continuously tracked by the IADS depends on the aircraft RCS and range from the perspective of each radar within the IADS. Finally, the requirement to maintain tracking prior to missile launch and during missile flyout are also modeled. Based on this model, the problem of UCAV path planning is formulated as a minimax optimal control problem, with the aircraft bank angle serving as control. Necessary conditions of optimality for this minimax problem are derived. Based on these necessary conditions, properties of the optimal paths are derived. These properties are used to discretize the dynamic optimization problem into a finite-dimensional, nonlinear programming problem that can be solved numerically. Properties of the optimal paths are also used to initialize the numerical procedure. A homotopy method is proposed to solve the finite-dimensional, nonlinear programming problem, and a heuristic method is proposed to improve the discretization during the homotopy process. Based upon the properties of numerical solutions, a method is proposed for parameterizing and storing information for later recall in flight to permit rapid replanning in response to changing threats. Illustrative examples are

  10. Thermoalgebras and path integral

    NASA Astrophysics Data System (ADS)

    Khanna, F. C.; Malbouisson, A. P. C.; Malbouisson, J. M. C.; Santana, A. E.

    2009-09-01

    Using a representation for Lie groups closely associated with thermal problems, we derive the algebraic rules of the real-time formalism for thermal quantum field theories, the so-called thermo-field dynamics (TFD), including the tilde conjugation rules for interacting fields. These thermo-group representations provide a unified view of different approaches for finite-temperature quantum fields in terms of a symmetry group. On these grounds, a path integral formalism is constructed, using Bogoliubov transformations, for bosons, fermions and non-abelian gauge fields. The generalization of the results for quantum fields in (S1)d×R topology is addressed.

  11. JAVA PathFinder

    NASA Technical Reports Server (NTRS)

    Mehlitz, Peter

    2005-01-01

    JPF is an explicit state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it reached the defect.

  12. Portage and Path Dependence*

    PubMed Central

    Bleakley, Hoyt; Lin, Jeffrey

    2012-01-01

    We examine portage sites in the U.S. South, Mid-Atlantic, and Midwest, including those on the fall line, a geomorphological feature in the southeastern U.S. marking the final rapids on rivers before the ocean. Historically, waterborne transport of goods required portage around the falls at these points, while some falls provided water power during early industrialization. These factors attracted commerce and manufacturing. Although these original advantages have long since been made obsolete, we document the continuing importance of these portage sites over time. We interpret these results as path dependence and contrast explanations based on sunk costs interacting with decreasing versus increasing returns to scale. PMID:23935217

  13. Portage and Path Dependence.

    PubMed

    Bleakley, Hoyt; Lin, Jeffrey

    2012-05-01

    We examine portage sites in the U.S. South, Mid-Atlantic, and Midwest, including those on the fall line, a geomorphological feature in the southeastern U.S. marking the final rapids on rivers before the ocean. Historically, waterborne transport of goods required portage around the falls at these points, while some falls provided water power during early industrialization. These factors attracted commerce and manufacturing. Although these original advantages have long since been made obsolete, we document the continuing importance of these portage sites over time. We interpret these results as path dependence and contrast explanations based on sunk costs interacting with decreasing versus increasing returns to scale. PMID:23935217

  14. The Probabilities of Conditionals Revisited

    ERIC Educational Resources Information Center

    Douven, Igor; Verbrugge, Sara

    2013-01-01

    According to what is now commonly referred to as "the Equation" in the literature on indicative conditionals, the probability of any indicative conditional equals the probability of its consequent given its antecedent. Philosophers widely agree in their assessment that the triviality arguments of…
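    In symbols, the Equation summarized above states that, for an indicative conditional with antecedent A and consequent C,

        P(\text{if } A \text{ then } C) \;=\; P(C \mid A)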

  15. Minimizing the probable maximum flood

    SciTech Connect

    Woodbury, M.S.; Pansic, N.; Eberlein, D.T.

    1994-06-01

    This article examines Wisconsin Electric Power Company's efforts to determine an economical way to comply with Federal Energy Regulatory Commission requirements at two hydroelectric developments on the Michigamme River. Their efforts included refinement of the area's probable maximum flood model based, in part, on a newly developed probable maximum precipitation estimate.

  16. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied because some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth, which can lead to sets of rounded probabilities that add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
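    A minimal sketch of the Maximin criterion named above under linearly constrained probabilities: for each action, a small linear program finds the worst-case expected payoff over all probability vectors within ±0.05 of the rounded values that sum to one, and the action with the best worst case is chosen. The payoff matrix and the rounding half-width are illustrative assumptions, not the paper's experimental setup.

        import numpy as np
        from scipy.optimize import linprog

        def maximin_action(payoffs, rounded_probs, half_width=0.05):
            """For each action, find the worst-case expected payoff over all
            probability vectors consistent with the rounded probabilities
            (each p_i within +/- half_width of its rounded value, summing to 1),
            then pick the action whose worst case is best."""
            lo = np.clip(np.array(rounded_probs) - half_width, 0.0, 1.0)
            hi = np.clip(np.array(rounded_probs) + half_width, 0.0, 1.0)
            worst = []
            for u in payoffs:                                  # one row per action
                res = linprog(c=u, A_eq=[np.ones_like(u)], b_eq=[1.0],
                              bounds=list(zip(lo, hi)), method="highs")
                worst.append(res.fun)                          # minimized expectation
            return int(np.argmax(worst)), worst

        # Illustrative payoff matrix: 2 actions x 3 states of nature.
        payoffs = np.array([[10.0, 2.0, -5.0],
                            [ 4.0, 4.0,  3.0]])
        best, worst_cases = maximin_action(payoffs, rounded_probs=[0.3, 0.5, 0.2])
        print(best, worst_cases)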

  17. Probability of sea level rise

    SciTech Connect

    Titus, J.G.; Narayanan, V.K.

    1995-10-01

    The report develops probability-based projections that can be added to local tide-gage trends to estimate future sea level at particular locations. It uses the same models employed by previous assessments of sea level rise. The key coefficients in those models are based on subjective probability distributions supplied by a cross-section of climatologists, oceanographers, and glaciologists.

  18. Understanding the Influence Career Paths Have on Community and Technical College Chief Business Officers' Satisfaction with Their Position: A Mixed Method Investigation

    ERIC Educational Resources Information Center

    File, Carter L.

    2013-01-01

    This study was undertaken to understand whether a community or technical college chief business officer's career line influenced the lived experience of job satisfaction. This mixed method study was conducted in a two-phase approach using the Explanatory Design: Participant Selection Model variant. An initial quantitative survey was conducted from…

  19. Non-Gaussian Photon Probability Distribution

    NASA Astrophysics Data System (ADS)

    Solomon, Benjamin T.

    2010-01-01

    This paper investigates the axiom that the photon's probability distribution is a Gaussian distribution. The Airy disc empirical evidence shows that the best fit, if not exact, distribution is a modified Gamma mΓ distribution (whose parameters are α = r, β = r/√u) in the plane orthogonal to the motion of the photon. This modified Gamma distribution is then used to reconstruct the probability distributions along the hypotenuse from the pinhole, the arc from the pinhole, and a line parallel to photon motion. This reconstruction shows that the photon's probability distribution is not a Gaussian function. However, under certain conditions, the distribution can appear to be Normal, thereby accounting for the success of quantum mechanics. This modified Gamma distribution changes with the shape of objects around it and thus explains how the observer alters the observation. This property therefore places additional constraints on quantum entanglement experiments. This paper shows that photon interaction is a multi-phenomena effect consisting of the probability to interact Pi, the probabilistic function, and the ability to interact Ai, the electromagnetic function. Splitting the probability function Pi from the electromagnetic function Ai enables the investigation of photon behavior from a purely probabilistic Pi perspective. The Probabilistic Interaction Hypothesis is proposed as a consistent method for handling the two different phenomena, the probability function Pi and the ability to interact Ai, thus redefining radiation shielding, stealth or cloaking, and invisibility as different effects of a single phenomenon Pi of the photon probability distribution. Subwavelength photon behavior is successfully modeled as a multi-phenomena behavior. The Probabilistic Interaction Hypothesis provides a good fit to Otoshi's (1972) microwave shielding, Schurig et al. (2006) microwave cloaking, and Oulton et al. (2008) subwavelength confinement, thereby providing a strong case that

  20. Diagnosis of multilayer clouds using photon path length distributions

    NASA Astrophysics Data System (ADS)

    Li, Siwei; Min, Qilong

    2010-10-01

    Photon path length distribution is sensitive to 3-D cloud structures. A detection method for multilayer clouds has been developed by utilizing the information in the photon path length distribution. The photon path length method estimates photon path length information from the low-level, single-layer cloud structure that can be accurately observed by a millimeter-wave cloud radar (MMCR) combined with a micropulse lidar (MPL). Because multiple scattering within and between cloud layers substantially lengthens the photon path, multilayer clouds can be diagnosed by evaluating the estimated photon path information against observed photon path length information from a co-located rotating shadowband spectrometer (RSS). The measurements of MMCR-MPL and RSS at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site have been processed for the year 2000. Case studies illustrate the consistency between MMCR-MPL detection and the photon path length method under most conditions. However, the photon path length method detected some multilayer clouds that were classified by the MMCR-MPL as single-layer clouds. From 1 year of statistics at the ARM SGP site, about 27.7% of single-layer clouds detected by the MMCR-MPL with solar zenith angle less than 70° and optical depth greater than 10 could be multilayer clouds. This suggests that a substantial portion of single-layer clouds detected by the MMCR-MPL could also be influenced by some "missed" clouds or by the 3-D effects of clouds.