Sample records for path probability method

  1. Minimal entropy probability paths between genome families.

    PubMed

    Ahlbrandt, Calvin; Benson, Gary; Casey, William

    2004-05-01

    We develop a metric for probability distributions with applications to biological sequence analysis. Our distance metric is obtained by minimizing a functional defined on the class of paths over probability measures on N categories. The underlying mathematical theory is connected to a constrained problem in the calculus of variations. The solution presented is a numerical solution, which approximates the true solution in a set of cases called rich paths, where none of the components of the path is zero. The functional to be minimized is motivated by entropy considerations, reflecting the idea that nature might efficiently carry out mutations of genome sequences in such a way that the increase in entropy involved in transformation is as small as possible. We characterize sequences by frequency profiles or probability vectors; in the case of DNA, N is 4 and the components of the probability vector are the frequencies of occurrence of the bases A, C, G and T. Given two probability vectors a and b, we define a distance function as the infimum of path integrals of the entropy function H(p) over all admissible paths p(t), 0 ≤ t ≤ 1, with p(t) a probability vector such that p(0) = a and p(1) = b. If the probability paths p(t) are parameterized as y(s) in terms of arc length s and the optimal path is smooth with arc length L, then smooth and "rich" optimal probability paths may be numerically estimated by a hybrid method: Newton's method is iterated on solutions of a two-point boundary value problem, with unknown distance L between the abscissas, for the Euler-Lagrange equations resulting from a multiplier rule for the constrained optimization problem, together with linear regression to improve the arc length estimate L. Matlab code for these numerical methods is provided, which works only for "rich" optimal probability vectors. These methods motivate a definition of an elementary distance function which is easier and faster to calculate, works on non-rich vectors, involves neither variational theory nor differential equations, and is a better approximation of the minimal entropy path distance than the distance ||b - a||_2. We compute minimal entropy distance matrices for examples of DNA myostatin genes and amino-acid sequences across several species. Output tree dendrograms for our minimal entropy metric are compared with dendrograms based on BLAST and BLAST identity scores.
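
    As a concrete companion (not the authors' Matlab code), the sketch below integrates the Shannon entropy H(p) = -sum_i p_i ln p_i along the straight-line path p(t) = (1-t)a + tb. Because the true distance is an infimum over all admissible paths, any single path yields an upper bound, which can be compared against ||b - a||_2; the frequency profiles are hypothetical.

    ```python
    import numpy as np

    def entropy(p, eps=1e-12):
        """Shannon entropy, clipped away from p_i = 0 for numerical safety."""
        p = np.clip(p, eps, 1.0)
        return -np.sum(p * np.log(p))

    def straight_line_entropy_distance(a, b, n=1000):
        """Upper bound on the minimal entropy path distance via the straight line."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        ts = np.linspace(0.0, 1.0, n + 1)
        values = [entropy((1 - t) * a + t * b) for t in ts]
        # |p'(t)| = ||b - a||_2 is constant on the straight line, so the
        # arc-length weight factors out of the quadrature.
        return np.linalg.norm(b - a) * np.trapz(values, ts)

    # Hypothetical A, C, G, T frequency profiles for two sequences.
    a = np.array([0.30, 0.20, 0.20, 0.30])
    b = np.array([0.10, 0.40, 0.40, 0.10])
    print(straight_line_entropy_distance(a, b), np.linalg.norm(b - a))
    ```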

  2. Blocking performance approximation in flexi-grid networks

    NASA Astrophysics Data System (ADS)

    Ge, Fei; Tan, Liansheng

    2016-12-01

    The blocking probability of path requests is an important issue in flexible bandwidth optical communications. In this paper, we propose a blocking probability approximation method for path requests in flexi-grid networks. It models the bundled neighboring-carrier allocation with a group of birth-death processes and provides a theoretical analysis of the blocking probability under variable-bandwidth traffic. The numerical results show the effect of traffic parameters on the blocking probability of path requests. We use the first-fit algorithm in network nodes to allocate neighboring carriers to path requests in simulations, and verify the approximation results.
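
    The birth-death building block lends itself to a compact numerical illustration. The sketch below computes the stationary distribution of a single M/M/C/C carrier-occupancy chain and its blocking probability (the Erlang-B form); the paper's bundled neighboring-carrier model is more elaborate, and the traffic parameters here are purely illustrative.

    ```python
    def blocking_probability(arrival_rate, service_rate, num_carriers):
        """Erlang-B: probability that all carriers are busy in an M/M/C/C chain."""
        rho = arrival_rate / service_rate
        # Unnormalized stationary probabilities pi_k = rho^k / k!.
        pi = [1.0]
        for k in range(1, num_carriers + 1):
            pi.append(pi[-1] * rho / k)
        return pi[-1] / sum(pi)

    # Illustrative load: 8 requests per unit time, unit holding time, 10 carriers.
    print(blocking_probability(arrival_rate=8.0, service_rate=1.0, num_carriers=10))
    ```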

  3. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  4. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    PubMed Central

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protecting users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former obtains the probabilities of arriving at target locations along simple paths that include only current locations, target locations, and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithms are verified. PMID:27508502
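
    The n-step computation described above reduces to a matrix power. A minimal sketch, assuming the single-step rules have already been mined into transition counts between anonymity regions (the counts below are hypothetical):

    ```python
    import numpy as np

    def n_step_matrix(counts, n):
        """Row-normalize transition counts, then raise to the power n."""
        counts = np.asarray(counts, float)
        row_sums = counts.sum(axis=1, keepdims=True)
        P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
        return np.linalg.matrix_power(P, n)

    # Hypothetical counts of observed transitions between 3 anonymity regions.
    counts = [[10, 5, 0],
              [2, 8, 6],
              [0, 4, 12]]
    P3 = n_step_matrix(counts, n=3)
    print(P3[0])  # probability of reaching each region from region 0 in 3 steps
    ```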

  5. Option volatility and the acceleration Lagrangian

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Cao, Yang

    2014-01-01

    This paper develops a volatility formula for an option on an asset from an acceleration Lagrangian model, and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity-dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is obtained by choosing proper boundary conditions. The conditional probability distribution of the final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.

  6. Bragg peak prediction from quantitative proton computed tomography using different path estimates

    PubMed Central

    Wang, Dongxu; Mackie, T Rockwell

    2015-01-01

    This paper characterizes the performance of the straight-line path (SLP) and cubic spline path (CSP) as path estimates used in reconstruction of proton computed tomography (pCT). The GEANT4 Monte Carlo simulation toolkit is employed to simulate the imaging phantom and proton projections. SLP, CSP and the most-probable path (MPP) are constructed based on the entrance and exit information of each proton. The physical deviations of SLP, CSP and MPP from the real path are calculated. Using a conditional proton path probability map, the relative probability of SLP, CSP and MPP are calculated and compared. The depth dose and Bragg peak are predicted on the pCT images reconstructed using SLP, CSP, and MPP and compared with the simulation result. The root-mean-square physical deviations and the cumulative distribution of the physical deviations show that the performance of CSP is comparable to MPP while SLP is slightly inferior. About 90% of the SLP pixels and 99% of the CSP pixels lie in the 99% relative probability envelope of the MPP. Even at an imaging dose of ~0.1 mGy the proton Bragg peak for a given incoming energy can be predicted on the pCT image reconstructed using SLP, CSP, or MPP with 1 mm accuracy. This study shows that SLP and CSP, like MPP, are adequate path estimates for pCT reconstruction, and therefore can be chosen as the path estimation method for pCT reconstruction, which can aid the treatment planning and range prediction of proton radiation therapy. PMID:21212472

  7. On Convergent Probability of a Random Walk

    ERIC Educational Resources Information Center

    Lee, Y.-F.; Ching, W.-K.

    2006-01-01

    This note introduces an interesting random walk on a straight path with cards of random numbers. The method of recurrence relations is used to obtain the convergent probability of the random walk with different initial positions.
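
    The note's card game is not reproduced here, but the recurrence-relation technique itself can be illustrated on the classic absorbing walk on {0, ..., N}: u_k = p·u_{k+1} + (1-p)·u_{k-1} with u_0 = 0 and u_N = 1, whose closed-form solution follows directly from the recurrence.

    ```python
    def absorption_probabilities(N, p):
        """u_k: probability of absorption at N for a walk started at k
        that steps right with probability p and left with probability 1-p."""
        q = 1.0 - p
        if p == q:  # symmetric walk: the recurrence gives u_k = k/N
            return [k / N for k in range(N + 1)]
        r = q / p
        denom = 1.0 - r ** N
        return [(1.0 - r ** k) / denom for k in range(N + 1)]

    print(absorption_probabilities(N=10, p=0.55))
    ```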

  8. Reliability computation using fault tree analysis

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.

    1971-01-01

    A method is presented for calculating event probabilities from an arbitrary fault tree. The method includes an analytical derivation of the system equation and is not a simulation program. The method can handle systems that incorporate standby redundancy and it uses conditional probabilities for computing fault trees where the same basic failure appears in more than one fault path.
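
    For intuition, a toy bottom-up evaluator for a fault tree with independent basic events is sketched below; the report's analytical method goes further, handling standby redundancy and basic events shared across fault paths via conditional probabilities. Gate structure and probabilities are hypothetical.

    ```python
    def and_gate(probs):
        """All inputs must fail (independence assumed)."""
        out = 1.0
        for p in probs:
            out *= p
        return out

    def or_gate(probs):
        """At least one input fails (independence assumed)."""
        out = 1.0
        for p in probs:
            out *= (1.0 - p)
        return 1.0 - out

    # Hypothetical top event: (A AND B) OR C.
    pA, pB, pC = 0.01, 0.02, 0.005
    print(or_gate([and_gate([pA, pB]), pC]))
    ```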

  9. Master equations and the theory of stochastic path integrals

    NASA Astrophysics Data System (ADS)

    Weber, Markus F.; Frey, Erwin

    2017-04-01

    This review provides a pedagogic and self-contained introduction to master equations and to their representation by path integrals. Since the 1930s, master equations have served as a fundamental tool to understand the role of fluctuations in complex biological, chemical, and physical systems. Despite their simple appearance, analyses of master equations most often rely on low-noise approximations such as the Kramers-Moyal or the system size expansion, or require ad-hoc closure schemes for the derivation of low-order moment equations. We focus on numerical and analytical methods going beyond the low-noise limit and provide a unified framework for the study of master equations. After deriving the forward and backward master equations from the Chapman-Kolmogorov equation, we show how the two master equations can be cast into either of four linear partial differential equations (PDEs). Three of these PDEs are discussed in detail. The first PDE governs the time evolution of a generalized probability generating function whose basis depends on the stochastic process under consideration. Spectral methods, WKB approximations, and a variational approach have been proposed for the analysis of the PDE. The second PDE is novel and is obeyed by a distribution that is marginalized over an initial state. It proves useful for the computation of mean extinction times. The third PDE describes the time evolution of a ‘generating functional’, which generalizes the so-called Poisson representation. Subsequently, the solutions of the PDEs are expressed in terms of two path integrals: a ‘forward’ and a ‘backward’ path integral. Combined with inverse transformations, one obtains two distinct path integral representations of the conditional probability distribution solving the master equations. We exemplify both path integrals in analysing elementary chemical reactions. Moreover, we show how a well-known path integral representation of averaged observables can be recovered from them. Upon expanding the forward and the backward path integrals around stationary paths, we then discuss and extend a recent method for the computation of rare event probabilities. Besides, we also derive path integral representations for processes with continuous state spaces whose forward and backward master equations admit Kramers-Moyal expansions. A truncation of the backward expansion at the level of a diffusion approximation recovers a classic path integral representation of the (backward) Fokker-Planck equation. One can rewrite this path integral in terms of an Onsager-Machlup function and, for purely diffusive Brownian motion, it simplifies to the path integral of Wiener. To make this review accessible to a broad community, we have used the language of probability theory rather than quantum (field) theory and do not assume any knowledge of the latter. The probabilistic structures underpinning various technical concepts, such as coherent states, the Doi-shift, and normal-ordered observables, are thereby made explicit.
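
    Although the review is analytical, a concrete numerical anchor may help: a Gillespie-type sampler draws stochastic paths whose ensemble statistics solve the forward master equation. The sketch below treats the elementary production/decay system 0 -> A (rate k), A -> 0 (rate g per molecule); all parameters are illustrative.

    ```python
    import random

    def gillespie_path(k, g, n0, t_max, seed=1):
        """Sample one trajectory of the production/decay master equation."""
        rng = random.Random(seed)
        t, n, path = 0.0, n0, [(0.0, n0)]
        while t < t_max:
            rates = [k, g * n]           # propensities of the two reactions
            total = sum(rates)
            if total == 0.0:
                break
            t += rng.expovariate(total)  # exponential waiting time to next jump
            if rng.random() < rates[0] / total:
                n += 1                   # production event
            else:
                n -= 1                   # decay event
            path.append((t, n))
        return path

    print(gillespie_path(k=5.0, g=0.5, n0=0, t_max=10.0)[-5:])
    ```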

  11. Path probability of stochastic motion: A functional approach

    NASA Astrophysics Data System (ADS)

    Hattori, Masayuki; Abe, Sumiyoshi

    2016-06-01

    The path probability of a particle undergoing stochastic motion is studied by use of a functional technique, and a general formula is derived for the path probability distribution functional. The probability of finding paths inside a tube/band, the center of which is stipulated by a given path, is analytically evaluated in a way analogous to continuous measurements in quantum mechanics. The formalism developed here is then applied to the stochastic dynamics of stock prices in finance.
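
    A brute-force numerical counterpart to the analytic tube probability: estimate, by Monte Carlo, the fraction of discretized Brownian paths that stay within a band of half-width eps around a reference path. The noise level, band width, and reference path below are illustrative.

    ```python
    import numpy as np

    def tube_probability(ref, dt, sigma, eps, n_samples=20000, seed=0):
        """Fraction of sampled paths staying within eps of ref at every step."""
        rng = np.random.default_rng(seed)
        n_steps = len(ref) - 1
        increments = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_samples, n_steps))
        paths = ref[0] + np.cumsum(increments, axis=1)
        inside = np.abs(paths - ref[1:]) <= eps   # check the band at each step
        return inside.all(axis=1).mean()

    ts = np.linspace(0.0, 1.0, 101)
    ref = np.zeros_like(ts)                       # tube centered on x(t) = 0
    print(tube_probability(ref, dt=ts[1] - ts[0], sigma=1.0, eps=0.5))
    ```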

  12. Dual stage potential field method for robotic path planning

    NASA Astrophysics Data System (ADS)

    Singh, Pradyumna Kumar; Parida, Pramod Kumar

    2018-04-01

    Path planning for autonomous mobile robots is at the root of all autonomous mobile systems. Various methods are used to optimize the path to be followed by an autonomous mobile robot. Artificial potential field based path planning is one of the methods most used by researchers, and various algorithms have been proposed using the potential field approach. However, most of them encounter common problems while heading towards the goal or target: the local minima problem, the zero potential region problem, the complex-shaped obstacle problem, and the target-near-obstacle problem. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to obtain the probable points and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and a static goal.

  13. A Discrete Probability Function Method for the Equation of Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A discrete probability function (DPF) method for the equation of radiative transfer is derived. The DPF is defined as the integral of the probability density function (PDF) over a discrete interval. The derivation allows the evaluation of the PDF of intensities leaving desired radiation paths, including turbulence-radiation interactions, without the use of computer-intensive stochastic methods. The DPF method has a distinct advantage over conventional PDF methods since the creation of a partial differential equation from the equation of transfer is avoided. Further, convergence of all moments of intensity is guaranteed at the basic level of simulation, unlike the stochastic method, where the number of realizations for convergence of higher order moments increases rapidly. The DPF method is described for a representative path with approximately integral-length-scale-sized spatial discretization. The results show good agreement with measurements in a propylene/air flame except for the effects of intermittency resulting from highly correlated realizations. The method can be extended to the treatment of spatial correlations as described in the Appendix. However, information regarding spatial correlations in turbulent flames is needed prior to the execution of this extension.
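
    The defining step of the DPF, integrating a PDF over discrete intervals so that each bin carries a probability rather than a density, is easy to state in code. A minimal sketch with a hypothetical Gaussian PDF:

    ```python
    import numpy as np

    def discrete_probability_function(pdf, edges, n_quad=200):
        """DPF: P(lo <= X < hi) for each interval [lo, hi) between edges."""
        dpf = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            xs = np.linspace(lo, hi, n_quad)
            dpf.append(np.trapz(pdf(xs), xs))   # quadrature over the bin
        return np.array(dpf)

    # Example: standard normal fluctuation, 8 bins on [-4, 4].
    pdf = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    edges = np.linspace(-4.0, 4.0, 9)
    dpf = discrete_probability_function(pdf, edges)
    print(dpf, dpf.sum())   # the bin probabilities sum to ~1
    ```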

  14. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  15. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.

  16. Mobile robot dynamic path planning based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic unknown environment, dynamic path planning for mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed: a reward value model is designed to estimate the probability of dynamic obstacles on the path, and the reward value function is applied to the genetic algorithm. Unique coding techniques reduce the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the security of the path, the shortest distance of the path, and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in all kinds of complex dynamic environments.

  17. Wave propagation in a random medium

    NASA Technical Reports Server (NTRS)

    Lee, R. W.; Harp, J. C.

    1969-01-01

    A simple technique is used to derive statistical characterizations of the perturbations imposed upon a wave (plane, spherical or beamed) propagating through a random medium. The method is essentially physical rather than mathematical, and is probably equivalent to the Rytov method. The limitations of the method are discussed in some detail; in general they are restrictive only for optical paths longer than a few hundred meters, and for paths at the lower microwave frequencies. Situations treated include arbitrary path geometries, finite transmitting and receiving apertures, and anisotropic media. Results include, in addition to the usual statistical quantities, time-lagged functions, mixed functions involving amplitude and phase fluctuations, angle-of-arrival covariances, frequency covariances, and other higher-order quantities.

  18. Evaluating the risk of industrial espionage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bott, T.F.

    1998-12-31

    A methodology for estimating the relative probabilities of different compromise paths for protected information by insider and visitor intelligence collectors has been developed, based on an event-tree analysis of the intelligence collection operation. The analyst identifies target information and ultimate users who might attempt to gain that information. The analyst then uses an event tree to develop a set of compromise paths. Probability models are developed for each of the compromise paths that use parameters based on expert judgment or historical data on security violations. The resulting probability estimates indicate the relative likelihood of different compromise paths and provide an input for security resource allocation. Application of the methodology is demonstrated using a national security example. A set of compromise paths and probability models specifically addressing this example espionage problem are developed. The probability models for hard-copy information compromise paths are quantified as an illustration of the results, using parametric values representative of historical data available in secure facilities, supplemented where necessary by expert judgment.

  19. Prediction of slant path rain attenuation statistics at various locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1977-01-01

    The paper describes a method for predicting slant path attenuation statistics at arbitrary locations for variable frequencies and path elevation angles. The method involves the use of median reflectivity factor-height profiles measured with radar as well as the use of long-term point rain rate data and assumed or measured drop size distributions. The attenuation coefficient due to cloud liquid water in the presence of rain is also considered. Absolute probability fade distributions are compared for eight cases: Maryland (15 GHz), Texas (30 GHz), Slough, England (19 and 37 GHz), Fayetteville, North Carolina (13 and 18 GHz), and Cambridge, Massachusetts (13 and 18 GHz).

  20. Multiple-path model of spectral reflectance of a dyed fabric.

    PubMed

    Rogers, Geoffrey; Dalloz, Nicolas; Fournel, Thierry; Hebert, Mathieu

    2017-05-01

    Experimental results are presented of the spectral reflectance of a dyed fabric as analyzed by a multiple-path model of reflection. The multiple-path model provides simple analytic expressions for reflection and transmission of turbid media by applying the Beer-Lambert law to each path through the medium and summing over all paths, each path weighted by its probability. The path-length probability is determined by a random-walk analysis. The experimental results presented here show excellent agreement with predictions made by the model.
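
    The model's central sum is compact enough to sketch: apply the Beer-Lambert law to each path length, weight by the path-length probability, and sum. The path-length distribution below is a hypothetical stand-in for the paper's random-walk-derived one.

    ```python
    import numpy as np

    def reflectance(absorption_coeff, path_lengths, path_probs):
        """Sum of Beer-Lambert attenuations, weighted by path probability."""
        path_lengths = np.asarray(path_lengths, float)
        path_probs = np.asarray(path_probs, float)
        return np.sum(path_probs * np.exp(-absorption_coeff * path_lengths))

    lengths = np.array([0.5, 1.0, 2.0, 4.0])   # mm, illustrative path lengths
    probs = np.array([0.4, 0.3, 0.2, 0.1])     # illustrative path probabilities
    for alpha in (0.1, 0.5, 1.0):              # spectral absorption (1/mm)
        print(alpha, reflectance(alpha, lengths, probs))
    ```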

  1. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a Human-Robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue which must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm based on random sampling has been widely applied in dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, perfect expansion, and faster exploring speed over other planning methods. However, the existing RRT algorithm has limitations in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method extends nodes toward a target direction and can dramatically increase the sampling speed and efficiency of RRT. A path optimization strategy based on the maximum curvature constraint is presented to generate a smooth and curvature-continuous executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle avoidance experiment in a dynamic unstructured environment for a robotic manipulator. The proposed method not only has great practical engineering significance for robotic manipulator obstacle avoidance in an intelligent factory, but also offers theoretical reference value for the path planning of other types of robots.
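
    For orientation, the basic RRT extend step that S-RRT builds on is sketched below in 2-D: sample a point (with a goal bias), find the nearest tree node, and step toward the sample. Obstacle checking and the S-RRT target-directed extension and curvature-constrained smoothing are omitted; all parameters are illustrative.

    ```python
    import math, random

    def rrt_extend(tree, goal, rng, step=0.5, goal_bias=0.2):
        """One RRT iteration: sample, find nearest node, step toward sample."""
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        nearest = min(tree, key=lambda q: math.dist(q, sample))
        theta = math.atan2(sample[1] - nearest[1], sample[0] - nearest[0])
        new = (nearest[0] + step * math.cos(theta), nearest[1] + step * math.sin(theta))
        tree.append(new)  # a real planner would collision-check this edge first
        return new

    rng = random.Random(0)
    tree, goal = [(0.0, 0.0)], (9.0, 9.0)
    for _ in range(500):
        if math.dist(rrt_extend(tree, goal, rng), goal) < 0.5:
            break
    print(len(tree), tree[-1])
    ```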

  2. Stationary properties of maximum-entropy random walks.

    PubMed

    Dixit, Purushottam D

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.

  3. Girsanov reweighting for path ensembles and Markov state models

    NASA Astrophysics Data System (ADS)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.

  4. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176

  5. Incorporating target registration error into robotic bone milling

    NASA Astrophysics Data System (ADS)

    Siebold, Michael A.; Dillon, Neal P.; Webster, Robert J.; Fitzpatrick, J. Michael

    2015-03-01

    Robots have been shown to be useful in assisting surgeons in a variety of bone drilling and milling procedures. Examples include commercial systems for joint repair or replacement surgeries, with in vitro feasibility recently shown for mastoidectomy. Typically, the robot is guided along a path planned on a CT image that has been registered to the physical anatomy in the operating room, which is in turn registered to the robot. The registrations often take advantage of the high accuracy of fiducial registration, but, because no real-world registration is perfect, the drill guided by the robot will inevitably deviate from its planned path. The extent of the deviation can vary from point to point along the path because of the spatial variation of target registration error. The allowable deviation can also vary spatially based on the necessary safety margin between the drill tip and various nearby anatomical structures along the path. Knowledge of the expected spatial distribution of registration error can be obtained from theoretical models or experimental measurements and used to modify the planned path. The objective of such modifications is to achieve desired probabilities for sparing specified structures. This approach has previously been studied for drilling straight holes but has not yet been generalized to milling procedures, such as mastoidectomy, in which cavities of more general shapes must be created. In this work, we present a general method for altering any path to achieve specified probabilities for any spatial arrangement of structures to be protected. We validate the method via numerical simulations in the context of mastoidectomy.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu

    In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.

  8. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    Path Integral is a method to transform a function from its initial condition to its final condition by multiplying the initial condition by the transition probability function, known as the propagator. In its early development, several studies focused on applying this method to solving problems only in quantum mechanics. Nevertheless, Path Integral can also be applied to other subjects with some modifications in the propagator function. In this study, we investigate the application of the Path Integral method to financial derivatives, namely stock options. The Black-Scholes Model (Nobel 1997) was an early anchor in option pricing studies. Though this model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes Model is still a legitimate equation for pricing an option. Its derivation is difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the Path Integral, whereby the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the Path Integral analytical solution and the Monte Carlo numerical solution to find the similarity between these two methods.
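
    A minimal numerical companion to the comparison described: the closed-form Black-Scholes call price versus a risk-neutral Monte Carlo estimate. For a European call under geometric Brownian motion the sum over paths collapses to sampling the terminal price, which is the shortcut used below; parameters are illustrative.

    ```python
    import math
    import numpy as np

    def Phi(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def bs_call(S0, K, r, sigma, T):
        """Closed-form Black-Scholes price of a European call."""
        d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        return S0 * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

    def mc_call(S0, K, r, sigma, T, n_paths=200_000, seed=1):
        """Risk-neutral Monte Carlo estimate of the same call price."""
        rng = np.random.default_rng(seed)
        z = rng.normal(size=n_paths)
        ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
        return math.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

    print(bs_call(100, 105, 0.05, 0.2, 1.0))   # analytic value, ~8.0
    print(mc_call(100, 105, 0.05, 0.2, 1.0))   # Monte Carlo estimate, close to it
    ```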

  9. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool to solve radiotherapy and medical imaging problems. Low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control on particle trajectory is a main cause of low efficiency in some applications. Take cone beam CT (CBCT) projection simulation as an example: a significant amount of computation is wasted on transporting photons that do not reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. A Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested in a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR with a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by this new path-by-path simulation scheme, where all the computations were spent on photons contributing to the detector signal. Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.

  11. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  12. Path lumping: An efficient algorithm to identify metastable path channels for conformational dynamics of multi-body systems

    NASA Astrophysics Data System (ADS)

    Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui

    2017-07-01

    Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.

  13. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    NASA Astrophysics Data System (ADS)

    Fronita, Mona; Gernowo, Rahmat; Gunawan, Vincencius

    2018-02-01

    Traveling Salesman Problem (TSP) is an optimization problem: find the shortest path that reaches several destinations in one trip, without passing through the same city twice, and returns to the departure city; the process is applied to delivery systems. This comparison is done using two methods, namely optimization by genetic algorithm and by hill climbing. Hill climbing works by directly selecting a new path to exchange with the neighbouring one whenever the new track distance is smaller than the previous track's, without further testing. Genetic algorithms depend on input parameters: the population size, the crossover probability, the mutation probability, and the number of generations. The process of determining the shortest path is supported by software developed around the Google Maps API. Tests were carried out 20 times with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments with 3, 4, 5 and 6 cities produce the same optimal distance for the genetic algorithm and hill climbing; the values begin to differ at 7 cities. The overall results show that, in these tests, hill climbing is more optimal for a small number of cities, while problems with more than 30 cities are better optimized using genetic algorithms.
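
    The hill-climbing side of the comparison is easy to sketch: repeatedly reverse a random segment of the tour (a 2-opt style move) and keep the new tour only if it is shorter. The coordinates below are hypothetical; the paper obtains real distances through the Google Maps API.

    ```python
    import math, random

    def tour_length(tour, pts):
        """Total length of the closed tour over points pts."""
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def hill_climb(pts, iters=20000, seed=42):
        rng = random.Random(seed)
        tour = list(range(len(pts)))
        best = tour_length(tour, pts)
        for _ in range(iters):
            i, j = sorted(rng.sample(range(len(pts)), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
            length = tour_length(cand, pts)
            if length < best:                                     # accept improvements only
                tour, best = cand, length
        return tour, best

    rng = random.Random(7)
    pts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(8)]
    print(hill_climb(pts))
    ```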

  14. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, Albert F., E-mail: wagner@anl.gov; Dawes, Richard; Continetti, Robert E.

    The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO2 on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined, with the best qualitative agreement with experiment occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but when matched at the saddle point have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment. This surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared but only the small curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP. The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.

  16. Continuity equation for probability as a requirement of inference over paths

    NASA Astrophysics Data System (ADS)

    González, Diego; Díaz, Daniela; Davis, Sergio

    2016-09-01

    Local conservation of probability, expressed as the continuity equation, is a central feature of non-equilibrium Statistical Mechanics. In the existing literature, the continuity equation is always motivated by heuristic arguments with no derivation from first principles. In this work we show that the continuity equation is a logical consequence of the laws of probability and the application of the formalism of inference over paths for dynamical systems. That is, the simple postulate that a system moves continuously through time following paths implies the continuity equation. The translation from the language of dynamical paths to the usual representation in terms of probability densities of states is performed by means of an identity derived from Bayes' theorem. The formalism presented here is valid independently of the nature of the system studied: it is applicable to physical systems and also to more abstract dynamics such as financial indicators and population dynamics in ecology, among others.

  17. Cost efficient environmental survey paths for detecting continuous tracer discharges

    NASA Astrophysics Data System (ADS)

    Alendal, G.

    2017-07-01

    Designing monitoring programs for detecting potential tracer discharges from unknown locations is challenging. The high variability of the environment may camouflage the anticipated anisotropic signal from a discharge, and there are a number of possible discharge scenarios. Monitoring operations may also be costly, constraining the number of measurements taken. Assuming that a discharge is active, and given a prior belief about the most likely seep location, a method that uses Bayes' theorem combined with discharge footprint predictions is used to update the probability map. Measurement locations with the highest reduction in the overall probability of a discharge being active can be identified. The relative cost between relocating and measuring can be taken into account. Three different strategies are suggested to enable cost-efficient paths for autonomous vessels.
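
    The core Bayesian update is compact. A minimal sketch, assuming a gridded prior over candidate seep cells and a per-cell detection probability (the "footprint") for the current sensor position: after a non-detection, cells the sensor covered well lose probability mass. The grid and footprint are hypothetical, and the full method also tracks the probability that any discharge is active at all.

    ```python
    import numpy as np

    def update_map_no_detection(prior, p_detect):
        """P(cell | no detection) ∝ P(no detection | cell) * P(cell)."""
        posterior = prior * (1.0 - p_detect)
        return posterior / posterior.sum()

    prior = np.full((4, 4), 1.0 / 16)   # uniform belief over a 4x4 grid of cells
    p_detect = np.zeros((4, 4))
    p_detect[1, 1] = 0.8                # footprint: sensor covers one cell well
    print(update_map_no_detection(prior, p_detect))
    ```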

  18. The description of two-photon Rabi oscillations in the path integral approach

    NASA Astrophysics Data System (ADS)

    Biryukov, A. A.; Degtyareva, Ya. V.; Shleenkov, M. A.

    2018-04-01

    The probability of quantum transitions of a molecule between its states under the action of an electromagnetic field is represented as an integral over trajectories of a real alternating functional. A method is proposed for computing the integral using recurrence relations. The method is applied to describe two-photon Rabi oscillations.

  19. A transformed path integral approach for solution of the Fokker-Planck equation

    NASA Astrophysics Data System (ADS)

    Subramaniam, Gnana M.; Vedula, Prakash

    2017-10-01

    A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using the Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
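
    For contrast with the TPI idea, the conventional fixed-grid path-integral step it improves upon can be sketched directly: evolve the PDF of dx = f(x)dt + sqrt(2D)dW with the Gaussian short-time propagator on a static grid. The TPI method instead works in a dynamically transformed coordinate whose grid adapts to the PDF statistics; the Ornstein-Uhlenbeck example below is illustrative.

    ```python
    import numpy as np

    def pi_step(pdf, xs, f, D, dt):
        """One short-time propagator step on a fixed grid xs."""
        dx = xs[1] - xs[0]
        # K[i, j] = P(x_i at t+dt | x_j at t): Gaussian centered on the drifted point.
        mean = xs + f(xs) * dt
        K = np.exp(-(xs[:, None] - mean[None, :])**2 / (4.0 * D * dt))
        K /= np.sqrt(4.0 * np.pi * D * dt)
        new = K @ pdf * dx
        return new / (new.sum() * dx)   # renormalize against quadrature error

    xs = np.linspace(-5, 5, 401)
    pdf = np.exp(-xs**2 / 0.02); pdf /= np.trapz(pdf, xs)   # sharp initial PDF
    f = lambda x: -x                                        # Ornstein-Uhlenbeck drift
    for _ in range(100):
        pdf = pi_step(pdf, xs, f, D=0.5, dt=0.01)
    print(np.trapz(xs**2 * pdf, xs))   # variance approaches D = 0.5 for this OU process
    ```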

  20. Energy-optimal path planning in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Subramani, Deepak N.; Haley, Patrick J.; Lermusiaux, Pierre F. J.

    2017-05-01

    We integrate data-driven ocean modeling with the stochastic Dynamically Orthogonal (DO) level-set optimization methodology to compute and study energy-optimal paths, speeds, and headings for ocean vehicles in the Middle-Atlantic Bight (MAB) region. We hindcast the energy-optimal paths from among exact time-optimal paths for the period 28 August 2006 to 9 September 2006. To do so, we first obtain a data-assimilative multiscale reanalysis, combining ocean observations with implicit two-way nested multiresolution primitive-equation simulations of the tidal-to-mesoscale dynamics in the region. Second, we solve the reduced-order stochastic DO level-set partial differential equations (PDEs) to compute the joint probability of minimum arrival time, vehicle-speed time series, and total energy utilized. Third, for each arrival time, we select the vehicle-speed time series that minimize the total energy utilization from the marginal probability of vehicle-speed and total energy. The corresponding energy-optimal path and headings are obtained through the exact particle-backtracking equation. Theoretically, the present methodology is PDE-based and provides fundamental energy-optimal predictions without heuristics. Computationally, it is 3-4 orders of magnitude faster than direct Monte Carlo methods. For the missions considered, we analyze the effects of the regional tidal currents, strong wind events, coastal jets, shelfbreak front, and other local circulations on the energy-optimal paths. Results showcase the opportunities for vehicles that intelligently utilize the ocean environment to minimize energy usage, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  1. A graph-based system for network-vulnerability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, L.P.; Phillips, C.

    1998-06-01

    This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
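
    The reduction from "highest probability of success" to a shortest-path computation is worth making concrete: since arc probabilities multiply along a path, taking -log of each probability turns the product into a sum, and Dijkstra's algorithm applies. The Python sketch below uses a made-up attack graph; node names and probabilities are purely illustrative.

      import heapq, math

      # Maximising the product of per-arc success probabilities is the same as
      # minimising the sum of -log(probability), so Dijkstra's algorithm finds
      # the most likely attack path. Graph and probabilities are invented.

      attack_graph = {                  # node -> [(next_node, success_prob), ...]
          "outside":    [("web_server", 0.8), ("vpn", 0.3)],
          "web_server": [("db_server", 0.5), ("admin_ws", 0.2)],
          "vpn":        [("admin_ws", 0.6)],
          "admin_ws":   [("db_server", 0.9)],
          "db_server":  [],
      }

      def most_likely_path(graph, src, dst):
          dist, prev = {src: 0.0}, {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  break
              if d > dist.get(u, math.inf):
                  continue                      # stale heap entry
              for v, p in graph[u]:
                  nd = d - math.log(p)          # additive weight per arc
                  if nd < dist.get(v, math.inf):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(heap, (nd, v))
          path, node = [dst], dst
          while node != src:
              node = prev[node]
              path.append(node)
          return path[::-1], math.exp(-dist[dst])

      path, prob = most_likely_path(attack_graph, "outside", "db_server")
      # path == ['outside', 'web_server', 'db_server'], prob == 0.4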

  2. Dynamic phase transitions of the Blume-Emery-Griffiths model under an oscillating external magnetic field by the path probability method

    NASA Astrophysics Data System (ADS)

    Ertaş, Mehmet; Keskin, Mustafa

    2015-03-01

    By using the path probability method (PPM) with point distribution, we study the dynamic phase transitions (DPTs) in the Blume-Emery-Griffiths (BEG) model under an oscillating external magnetic field. The phases in the model are obtained by solving the dynamic equations for the average order parameters, and a disordered phase, an ordered phase and four mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature of the dynamic transitions as well as to obtain the DPT temperatures. The dynamic phase diagrams are presented in three different planes, which exhibit the dynamic tricritical point, double critical end point, critical end point, quadruple point, triple point as well as reentrant behavior, strongly depending on the values of the system parameters. We compare and discuss these dynamic phase diagrams with the dynamic phase diagrams obtained within Glauber-type stochastic dynamics based on mean-field theory.

  3. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966

  4. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  5. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.
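
    To make the importance-sampling mechanics concrete, the Python sketch below implements a deliberately simplified weighted SSA for a birth-death process with a fixed bias factor; ABSIS instead chooses its bias adaptively from exhaustively generated short look-ahead paths. The rate constants, bias, and target threshold are invented for illustration.

      import random

      # Weighted SSA for a birth-death process X -> X+1 (rate k1) and
      # X -> X-1 (rate k2*X), estimating P(reach N before extinction).
      # A fixed factor 'gamma' inflates the birth propensity; the weight w
      # accumulates the likelihood ratio of the embedded jump chain, which
      # is sufficient for events defined by the visited states alone.

      def weighted_ssa(x0=5, N=40, k1=1.0, k2=0.1, gamma=2.0):
          x, w = x0, 1.0
          while 0 < x < N:
              a_birth, a_death = k1, k2 * x                 # true propensities
              b_birth, b_death = gamma * a_birth, a_death   # biased propensities
              a_tot, b_tot = a_birth + a_death, b_birth + b_death
              if random.random() < b_birth / b_tot:
                  w *= (a_birth / a_tot) / (b_birth / b_tot)
                  x += 1
              else:
                  w *= (a_death / a_tot) / (b_death / b_tot)
                  x -= 1
          return w if x == N else 0.0

      n = 20000
      estimate = sum(weighted_ssa() for _ in range(n)) / n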

  6. Brief communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, Pierrick; Jaboyedoff, Michel; Cloutier, Catherine; Crosta, Giovanni B.; Lévy, Sébastien

    2016-04-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a temporal spatial probability, i.e. the probability of a vehicle being in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method considering an impact on the front of the vehicle is discussed.

  7. Brief Communication: On direct impact probability of landslides on vehicles

    NASA Astrophysics Data System (ADS)

    Nicolet, P.; Jaboyedoff, M.; Cloutier, C.; Crosta, G. B.; Lévy, S.

    2015-12-01

    When calculating the risk of railway or road users being killed by a natural hazard, one has to calculate a "spatio-temporal probability", i.e. the probability for a vehicle to be in the path of the falling mass when the mass falls, or the expected number of affected vehicles in case of such an event. To calculate this, different methods are used in the literature, and, most of the time, they consider only the dimensions of the falling mass or the dimensions of the vehicles. Some authors do however consider both dimensions at the same time, and the use of their approach is recommended. Finally, a method that additionally considers an impact on the front of the vehicle is discussed.
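
    A common form of this spatio-temporal probability, combining both dimensions, treats each vehicle as occupying the hazard zone for the time needed to traverse the deposit width plus its own length. The Python sketch below encodes that form; it is an illustrative expression under these assumptions, not the exact formula recommended in the papers above.

      # Expected number of vehicles in the landslide path, accounting for
      # both the deposit width and the vehicle length (illustrative form).

      def impact_probability(traffic_per_day, speed_kmh, mass_width_m,
                             vehicle_length_m):
          vehicles_per_hour = traffic_per_day / 24.0
          # Time (hours) during which a vehicle overlaps the deposit zone:
          exposure_h = (mass_width_m + vehicle_length_m) / (speed_kmh * 1000.0)
          return vehicles_per_hour * exposure_h

      # 5000 vehicles/day at 80 km/h, 20 m wide deposit, 4.5 m long vehicles
      p = impact_probability(5000, 80, 20.0, 4.5)   # ~0.064 vehicles exposed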

  8. Langevin Dynamics, Large Deviations and Instantons for the Quasi-Geostrophic Model and Two-Dimensional Euler Equations

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2014-09-01

    We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.

  9. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    PubMed

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data, without the necessity to assume a noise model. These methods have previously been combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise-induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph-based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, in the present work the bootstrap method is incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path-finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest-path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph-based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of streamline tractography algorithms or the assumption of a noise distribution. Moreover, the BootGraph can be applied to common DTI data sets without further modifications and shows a high repeatability. Thus, it is very well suited for longitudinal studies and meta-studies based on DTI. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Path integration mediated systematic search: a Bayesian model.

    PubMed

    Vickerstaff, Robert J; Merkle, Tobias

    2012-08-21

    The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
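
    The Bayesian core of such a model can be illustrated compactly: maintain a grid posterior over the nest location, down-weight cells after unsuccessful search there, and head for the cell with the highest remaining probability. The Python sketch below does exactly that; the Gaussian prior width, detection radius, and greedy decision rule are illustrative assumptions, not the fitted model of the paper.

      import numpy as np

      # Grid-based Bayesian systematic search: the posterior over the nest
      # position is updated after each failed detection, and the searcher
      # moves to the cell with the highest remaining probability.

      n = 101
      xs = np.linspace(-10.0, 10.0, n)
      X, Y = np.meshgrid(xs, xs)
      sigma_pi = 2.0                          # path-integration error scale
      post = np.exp(-(X**2 + Y**2) / (2 * sigma_pi**2))
      post /= post.sum()

      def detected_mask(cx, cy, r=0.5):
          """Cells from which the nest would be detected at (cx, cy)."""
          return ((X - cx)**2 + (Y - cy)**2 <= r**2).astype(float)

      pos = (0.0, 0.0)
      for _ in range(50):
          post *= 1.0 - detected_mask(*pos)   # Bayes: nest was not found here
          post /= post.sum()
          i = np.unravel_index(np.argmax(post), post.shape)
          pos = (X[i], Y[i])                  # greedy next search point

    Increasing sigma_pi broadens the prior and hence the resulting search pattern, which is the adaptive behavior the model shares with the ants.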

  11. Robust path planning for flexible needle insertion using Markov decision processes.

    PubMed

    Tan, Xiaoyu; Yu, Pengqian; Lim, Kah-Bin; Chui, Chee-Kong

    2018-05-11

    The flexible needle has the potential to accurately navigate to a treatment region in the least invasive manner. We propose a new planning method using Markov decision processes (MDPs) for flexible needle navigation that can perform robust path planning and steering under the circumstance of complex tissue-needle interactions. This method enhances the robustness of flexible needle steering from three different perspectives. First, the method considers the problem caused by soft tissue deformation. The method then resolves the common needle penetration failure caused by patterns of targets, while the last solution addresses the uncertainty issues in flexible needle motion due to complex and unpredictable tissue-needle interaction. Computer simulation and phantom experimental results show that the proposed method can perform robust planning and generate a secure control policy for flexible needle steering. Compared with a traditional method using MDPs, the proposed method achieves higher accuracy and probability of success in avoiding obstacles under complicated and uncertain tissue-needle interactions. Future work will involve experiments with biological tissue in vivo. The proposed robust path planning method can securely steer the flexible needle within soft phantom tissues and achieve high adaptability in computer simulation.
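
    The MDP backbone of such planners is value iteration over a discretized state space with stochastic transitions standing in for uncertain tissue-needle interaction. The Python sketch below shows this on a toy one-dimensional corridor; the state space, noise level, rewards, and discount are invented for illustration and bear no relation to the paper's tissue models.

      import numpy as np

      # Value iteration on a toy corridor MDP: states are lateral offsets,
      # actions steer left/straight/right, and motion noise models uncertain
      # tissue-needle interaction. One goal state, one obstacle state.

      n_states, goal, obstacle = 21, 10, 5
      actions = (-1, 0, 1)
      noise, discount = 0.2, 0.95

      R = np.full(n_states, -1.0)           # step cost encourages short paths
      R[goal], R[obstacle] = 100.0, -100.0

      def step_probs(s, a):
          """Intended move with prob 1-noise, slip one cell either way."""
          probs = {}
          for d, p in ((a, 1 - noise), (a - 1, noise / 2), (a + 1, noise / 2)):
              s2 = min(max(s + d, 0), n_states - 1)
              probs[s2] = probs.get(s2, 0.0) + p
          return probs

      V = np.zeros(n_states)
      for _ in range(200):                  # iterate to (near) convergence
          V = np.array([R[s] + discount * max(
              sum(p * V[s2] for s2, p in step_probs(s, a).items())
              for a in actions) for s in range(n_states)])

      policy = [max(actions, key=lambda a, s=s: sum(
          p * V[s2] for s2, p in step_probs(s, a).items()))
          for s in range(n_states)]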

  12. Hedged Monte-Carlo: low variance derivative pricing with objective probabilities

    NASA Astrophysics Data System (ADS)

    Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan

    2001-01-01

    We propose a new ‘hedged’ Monte-Carlo (HMC) method to price financial derivatives, which allows one to determine the optimal hedge simultaneously. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason considerably reduces the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path-dependent and early exercise features.
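
    A one-period toy version conveys the idea: choose the price C and hedge ratio phi at the initial time to minimize the variance of the hedged profit-and-loss over simulated paths, which is an ordinary least-squares problem. The full method applies this recursively at every time step with basis-function regressions. All parameters below are illustrative, and the simulated drifted paths stand in for the historical series the method would actually use.

      import numpy as np

      # One-period hedged Monte Carlo: minimise E[(C + phi*(ST - S0) - payoff)^2]
      # over (C, phi). This is OLS of the payoff on [1, ST - S0]. The hedge
      # term compensates most of the drift, so a near risk-neutral price
      # emerges even from drifted, 'objective' paths.

      rng = np.random.default_rng(0)
      S0, K, mu, sigma, T, n = 100.0, 100.0, 0.10, 0.2, 0.25, 100_000
      Z = rng.standard_normal(n)
      ST = S0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
      payoff = np.maximum(ST - K, 0.0)          # European call payoff

      A = np.column_stack([np.ones(n), ST - S0])
      (C, phi), *_ = np.linalg.lstsq(A, payoff, rcond=None)
      # C comes out close to the zero-rate Black-Scholes price (~4.0 here)
      # despite the 10% drift; phi plays the role of the delta-like hedge.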

  13. Theoretical/experimental comparison of deep tunneling decay of quasi-bound H(D)OCO to H(D) + CO₂.

    PubMed

    Wagner, Albert F; Dawes, Richard; Continetti, Robert E; Guo, Hua

    2014-08-07

    The measured H(D)OCO survival fractions of the photoelectron-photofragment coincidence experiments by the Continetti group are qualitatively reproduced by tunneling calculations to H(D) + CO2 on several recent ab initio potential energy surfaces for the HOCO system. The tunneling calculations involve effective one-dimensional barriers based on steepest descent paths computed on each potential energy surface. The resulting tunneling probabilities are converted into H(D)OCO survival fractions using a model developed by the Continetti group in which every oscillation of the H(D)-OCO stretch provides an opportunity to tunnel. Four different potential energy surfaces are examined, with the best qualitative agreement with experiment occurring for the PIP-NN surface based on UCCSD(T)-F12a/AVTZ electronic structure calculations and also a partial surface constructed for this study based on CASPT2/AVDZ electronic structure calculations. These two surfaces differ in barrier height by 1.6 kcal/mol but, when matched at the saddle point, have an almost identical shape along their reaction paths. The PIP surface is a less accurate fit to a smaller ab initio data set than that used for PIP-NN, and its computed survival fractions are somewhat inferior to PIP-NN. The LTSH potential energy surface is the oldest surface examined and is qualitatively incompatible with experiment. This surface also has a small discontinuity that is easily repaired. On each surface, four different approximate tunneling methods are compared, but only the small curvature tunneling method and the improved semiclassical transition state method produce useful results on all four surfaces. The results of these two methods are generally comparable and in qualitative agreement with experiment on the PIP-NN and CASPT2 surfaces. The original semiclassical transition state theory method produces qualitatively incorrect tunneling probabilities on all surfaces except the PIP. The Eckart tunneling method uses the least amount of information about the reaction path and produces too high a tunneling probability on the PIP-NN surface, leading to survival fractions that peak at half their measured values.

  14. Analysis and elimination of a bias in targeted molecular dynamics simulations of conformational transitions: application to calmodulin.

    PubMed

    Ovchinnikov, Victor; Karplus, Martin

    2012-07-26

    The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.

  15. Weak measurements measure probability amplitudes (and very little else)

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2016-04-01

    Conventional quantum mechanics describes a pre- and post-selected system in terms of virtual (Feynman) paths via which the final state can be reached. In the absence of probabilities, a weak measurement (WM) determines the probability amplitudes for the paths involved. The weak values (WV) can be identified with these amplitudes, or their linear combinations. This allows us to explain the "unusual" properties of the WV, and avoid the "paradoxes" often associated with the WM.

  16. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
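
    For a small chain, the quantities this method targets can be cross-checked by direct linear algebra: mean first-passage times into an absorbing set solve (I - Q) t = 1, where Q is the transition matrix restricted to the transient states. The Python sketch below uses an invented three-state chain; the paper's graph transformation reaches the same quantities by eliminating nodes one at a time, which remains robust where direct solves lose precision.

      import numpy as np

      # Mean first-passage times for a discrete-time Markov chain with one
      # absorbing state: solve (I - Q) t = 1 over the transient states.

      P = np.array([[0.50, 0.45, 0.05],
                    [0.30, 0.60, 0.10],
                    [0.00, 0.00, 1.00]])      # state 2 is absorbing

      transient = [0, 1]
      Q = P[np.ix_(transient, transient)]
      t = np.linalg.solve(np.eye(len(transient)) - Q, np.ones(len(transient)))
      # t[i] = expected number of steps from transient state i to absorption;
      # here t ~ [13.08, 12.31]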

  17. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  18. Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States

    NASA Astrophysics Data System (ADS)

    de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M.

    2018-03-01

    Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.

  19. Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States.

    PubMed

    de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M

    2018-03-23

    Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.

  20. Effective distances for epidemics spreading on complex networks.

    PubMed

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.

  1. Effective distances for epidemics spreading on complex networks

    NASA Astrophysics Data System (ADS)

    Iannelli, Flavio; Koher, Andreas; Brockmann, Dirk; Hövel, Philipp; Sokolov, Igor M.

    2017-01-01

    We show that the recently introduced logarithmic metrics used to predict disease arrival times on complex networks are approximations of more general network-based measures derived from random-walk theory. Using daily air-traffic transportation data, we perform numerical experiments to compare the infection arrival time with this alternative metric, which is obtained by accounting for multiple walks instead of only the most probable path. The comparison with direct simulations reveals a higher correlation compared to the shortest-path approach used previously. In addition, our method allows us to connect fundamental observables in epidemic spreading with the cumulant-generating function of the hitting time for a Markov chain. Our results provide a general and computationally efficient approach using only algebraic methods.
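
    For orientation, the single-path version of the effective distance assigns each link m -> n the length 1 - ln(P_mn), where P_mn is the fraction of traffic leaving m that goes to n, and the arrival-time proxy is the shortest such path; the random-walk measure of this paper generalizes that by summing over all walks. The Python sketch below computes the single-path quantity on an invented flux matrix.

      import heapq, math

      # Shortest-path effective distance: per-link length 1 - ln(P_mn),
      # which is always >= 1 since P_mn <= 1, so Dijkstra applies.

      flux = {                        # node -> {neighbour: traffic fraction}
          "A": {"B": 0.7, "C": 0.3},
          "B": {"A": 0.5, "C": 0.5},
          "C": {"A": 0.2, "B": 0.8},
      }

      def effective_distance(src, dst):
          dist, heap = {src: 0.0}, [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  return d
              if d > dist.get(u, math.inf):
                  continue
              for v, p in flux[u].items():
                  nd = d + 1.0 - math.log(p)
                  if nd < dist.get(v, math.inf):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return math.inf

      d_ab = effective_distance("A", "B")   # 1 - ln(0.7) ~ 1.36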

  2. Optimizing Retransmission Threshold in Wireless Sensor Networks

    PubMed Central

    Bi, Ran; Li, Yingshu; Tan, Guozhen; Sun, Liang

    2016-01-01

    The retransmission threshold in wireless sensor networks is critical to the latency of data delivery in the networks. However, existing works on data transmission in sensor networks did not consider the optimization of the retransmission threshold, and they simply set the same retransmission threshold for all sensor nodes in advance. The method did not take link quality and delay requirement into account, which decreases the probability of a packet passing its delivery path within a given deadline. This paper investigates the problem of finding optimal retransmission thresholds for relay nodes along a delivery path in a sensor network. The object of optimizing retransmission thresholds is to maximize the summation of the probability of the packet being successfully delivered to the next relay node or destination node in time. A dynamic programming-based distributed algorithm for finding optimal retransmission thresholds for relay nodes along a delivery path in the sensor network is proposed. The time complexity is O(nΔ·max_{1≤i≤n} u_i), where u_i is the given upper bound of the retransmission threshold of sensor node i in a given delivery path, n is the length of the delivery path and Δ is the given upper bound of the transmission delay of the delivery path. If Δ is greater than the polynomial, to reduce the time complexity, a linear programming-based (1+p_min)-approximation algorithm is proposed. Furthermore, when the ranges of the upper and lower bounds of retransmission thresholds are big enough, a Lagrange multiplier-based distributed O(1)-approximation algorithm with time complexity O(1) is proposed. Experimental results show that the proposed algorithms have better performance. PMID:27171092
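
    The dynamic-programming idea can be sketched directly: for each hop, the probability of clearing it on the k-th attempt is p(1-p)^(k-1), and the recursion maximizes the on-time delivery probability over the per-hop thresholds subject to the remaining slot budget. The Python sketch below assumes one slot per (re)transmission; the link qualities, bounds, and budget are invented.

      from functools import lru_cache

      # DP over (hop, remaining slot budget): choose each hop's retransmission
      # threshold to maximise the probability the packet clears the whole
      # path before the deadline, with one time slot per attempt.

      p_link = [0.9, 0.7, 0.8]       # per-hop single-attempt success probability
      u_max = [5, 5, 5]              # upper bounds on retransmission thresholds
      BUDGET = 9                     # total slots allowed for the path

      @lru_cache(maxsize=None)
      def best(hop, budget):
          if hop == len(p_link):
              return 1.0             # all hops cleared in time
          if budget <= 0:
              return 0.0
          p, best_p = p_link[hop], 0.0
          for u in range(1, min(u_max[hop], budget) + 1):
              # success on attempt k uses k slots, prob p * (1-p)^(k-1)
              succ = sum(p * (1 - p)**(k - 1) * best(hop + 1, budget - k)
                         for k in range(1, u + 1))
              best_p = max(best_p, succ)
          return best_p

      prob_on_time = best(0, BUDGET)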

  3. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential.

    PubMed

    Edwards, James P; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  4. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    NASA Astrophysics Data System (ADS)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  5. Path analyses of cross-sectional and longitudinal data suggest that variability in natural communities of blood-associated parasites is derived from host characteristics and not interspecific interactions.

    PubMed

    Cohen, Carmit; Einav, Monica; Hawlena, Hadas

    2015-08-19

    The parasite composition of wild host individuals often impacts their behavior and physiology, and the transmission dynamics of pathogenic species thereby determines disease risk in natural communities. Yet, the determinants of parasite composition in natural communities are still obscure. In particular, three fundamental questions remain open: (1) what are the relative roles of host and environmental characteristics compared with direct interactions between parasites in determining the community composition of parasites? (2) do these determinants affect parasites belonging to the same guild and those belonging to different guilds in similar manners? and (3) can cross-sectional and longitudinal analyses work interchangeably in detecting community determinants? Our study was designed to answer these three questions in a natural community of rodents and their fleas, ticks, and two vector-borne bacteria. We sampled a natural population of Gerbillus andersoni rodents and their blood-associated parasites on two occasions. By combining path analysis and model selection approaches, we then explored multiple direct and indirect paths that connect (i) the environmental and host-related characteristics to the infection probability of a host by each of the four parasite species, and (ii) the infection probabilities of the four species by each other. Our results suggest that the majority of paths shaping the blood-associated communities are indirect, mostly determined by host characteristics and not by interspecific interactions or environmental conditions. The exact effects of host characteristics on infection probability by a given parasite depend on its life history and on the method of sampling, in which the cross-sectional and longitudinal methods are complementary. Despite the awareness of the need of ecological investigations into natural host-vector-parasite communities in light of the emergence and re-emergence of vector-borne diseases, we lack sampling methods that are both practical and reliable. Here we illustrated how comprehensive patterns can be revealed from observational data by applying path analysis and model selection approaches and combining cross-sectional and longitudinal analyses. By employing this combined approach on blood-associated parasites, we were able to distinguish between direct and indirect effects and to predict the causal relationships between host-related characteristics and the parasite composition over time and space. We concluded that direct interactions within the community play only a minor role in determining community composition relative to host characteristics and the life history of the community members.

  6. What is the correct cost functional for variational data assimilation?

    NASA Astrophysics Data System (ADS)

    Bröcker, Jochen

    2018-03-01

    Variational approaches to data assimilation, and weakly constrained four-dimensional variational assimilation (WC-4DVar) in particular, are important in the geosciences but also in other communities (often under different names). The cost functions and the resulting optimal trajectories may have a probabilistic interpretation, for instance by linking data assimilation with maximum a posteriori (MAP) estimation. This is possible in particular if the unknown trajectory is modelled as the solution of a stochastic differential equation (SDE), as is increasingly the case in weather forecasting and climate modelling. In this situation, the MAP estimator (or "most probable path" of the SDE) is obtained by minimising the Onsager-Machlup functional. Although this fact is well known, there seems to be some confusion in the literature, with the energy (or "least squares") functional sometimes being claimed to yield the most probable path. The first aim of this paper is to address this confusion and show that the energy functional does not, in general, provide the most probable path. The second aim is to discuss the implications in practice. Although the mentioned results pertain to stochastic models in continuous time, they do have consequences in practice, where SDEs are approximated by discrete time schemes. It turns out that using an approximation to the SDE and calculating its most probable path does not necessarily yield a good approximation to the most probable path of the SDE proper. This suggests that even in discrete time, a version of the Onsager-Machlup functional should be used, rather than the energy functional, at least if the solution is to be interpreted as a MAP estimator.
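
    For reference, here is a standard statement of the distinction for an SDE dX_t = b(X_t) dt + σ dW_t with constant noise amplitude σ (conventions for the 1/2 factors vary across the literature):

      J_{\mathrm{OM}}[x] \;=\; \frac{1}{2}\int_0^T \left( \frac{\lvert \dot{x}(t) - b(x(t)) \rvert^2}{\sigma^2} + \nabla \cdot b(x(t)) \right) dt,
      \qquad
      J_{E}[x] \;=\; \frac{1}{2}\int_0^T \frac{\lvert \dot{x}(t) - b(x(t)) \rvert^2}{\sigma^2} \, dt.

    The divergence term penalizes paths that linger in strongly contracting regions of the drift, which is why minimizers of the Onsager-Machlup functional and of the energy functional generally differ, even after discretization.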

  7. ECG fiducial point extraction using switching Kalman filter.

    PubMed

    Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian

    2018-04-01

    In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote a mode as a specific observation equation; the switch changes among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods. The proposed method achieves a lower RMSE and smaller variability than the other methods. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Vapor nucleation paths in lyophobic nanopores.

    PubMed

    Tinti, Antonio; Giacomello, Alberto; Casciola, Carlo Massimo

    2018-04-19

    In recent years, technologies revolving around the use of lyophobic nanopores have gained considerable attention in both fundamental and applied research. Owing to the enormous internal surface area, heterogeneous lyophobic systems (HLS), constituted by a nanoporous lyophobic material and a non-wetting liquid, are promising candidates for the efficient storage or dissipation of mechanical energy. These diverse applications both rely on the forced intrusion and extrusion of the non-wetting liquid inside the pores; the behavior of HLS for storage or dissipation depends on the hysteresis between these two processes, which, in turn, are determined by the microscopic details of the system. It is easy to understand that molecular simulations provide an unmatched tool for understanding phenomena at these scales. In this contribution we use advanced atomistic simulation techniques in order to study the nucleation of vapor bubbles inside lyophobic mesopores. The use of the string method in collective variables allows us to overcome the computational challenges associated with the activated nature of the phenomenon, rendering a detailed picture of nucleation in confinement. In particular, this rare event method efficiently searches for the most probable nucleation path(s) in otherwise intractable, high-dimensional free-energy landscapes. Results reveal the existence of several independent nucleation paths associated with different free-energy barriers. In particular, there is a family of asymmetric transition paths, in which a bubble forms at one of the walls; the other family involves the formation of axisymmetric bubbles with an annulus shape. The computed free-energy profiles reveal that the asymmetric path is significantly more probable than the symmetric one, while the exact position where the asymmetric bubble forms is less relevant for the free energetics of the process. A comparison of the atomistic results with continuum models is also presented, showing how, for simple liquids in mesoporous materials of characteristic size of ca. 4 nm, the nanoscale effects reported for smaller pores have a minor role. The atomistic estimates for the nucleation free-energy barrier are in qualitative accord with those that can be obtained using a macroscopic, capillary-based nucleation theory.

  9. Wavelength assignment algorithm considering the state of neighborhood links for OBS networks

    NASA Astrophysics Data System (ADS)

    Tanaka, Yu; Hirota, Yusuke; Tode, Hideki; Murakami, Koso

    2005-10-01

    Recently, optical WDM technology has been introduced into backbone networks. On the other hand, as a future optical switching scheme, Optical Burst Switching (OBS) has become a realistic solution. OBS systems do not consider buffering in intermediate nodes. Thus, it is an important issue to avoid overlapping wavelength reservation between partially interfering paths. To solve this problem, a wavelength assignment scheme based on priority management tables has previously been proposed. This method reduces the burst blocking probability. However, the priority management tables require huge memory space. In this paper, we propose a wavelength assignment algorithm that reduces both the number of priority management tables and the burst blocking probability. To reduce the number of priority management tables, we allocate and manage them per link. To reduce the burst blocking probability, our method announces priority changes to intermediate nodes. We evaluate its performance in terms of the burst blocking probability and the reduction rate of priority management tables.

  10. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion, which makes exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties, or rely on a user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by those of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties, which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as the IPv4 zeroconf protocol and the dining philosophers protocol modeled as discrete-time Markov chains.
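
    Phase (b) is plain statistical model checking of a bounded until property: sample path prefixes of length at most k0 and count how many satisfy Phi1 U Phi2. The Python sketch below does this for an invented three-state DTMC; standard Chernoff-Hoeffding bounds on the sample count then give the desired error guarantee.

      import random

      # Monte Carlo estimate of P(Phi1 U<=k0 Phi2) on a small DTMC.
      # Transition table, labels, and k0 are illustrative.

      P = {0: [(0, 0.6), (1, 0.3), (2, 0.1)],   # state -> [(next, prob), ...]
           1: [(1, 0.5), (2, 0.5)],
           2: [(2, 1.0)]}                       # state 2 absorbing ("goal")
      phi1 = {0, 1}                             # states where Phi1 holds
      phi2 = {2}                                # states where Phi2 holds

      def sample_until(start, k0):
          s = start
          for _ in range(k0 + 1):
              if s in phi2:
                  return True                   # Phi2 reached within the bound
              if s not in phi1:
                  return False                  # left the Phi1 region first
              r, acc = random.random(), 0.0
              for nxt, p in P[s]:
                  acc += p
                  if r < acc:
                      s = nxt
                      break
          return False

      n = 100_000
      estimate = sum(sample_until(0, k0=20) for _ in range(n)) / n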

  11. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  12. Spatial capture-recapture models for jointly estimating population density and landscape connectivity

    USGS Publications Warehouse

    Royle, J. Andrew; Chandler, Richard B.; Gazenski, Kimberly D.; Graves, Tabitha A.

    2013-01-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture–recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on “ecological distance,” i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture–recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture–recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  13. Spatial capture-recapture models for jointly estimating population density and landscape connectivity.

    PubMed

    Royle, J Andrew; Chandler, Richard B; Gazenski, Kimberly D; Graves, Tabitha A

    2013-02-01

    Population size and landscape connectivity are key determinants of population viability, yet no methods exist for simultaneously estimating density and connectivity parameters. Recently developed spatial capture-recapture (SCR) models provide a framework for estimating density of animal populations but thus far have not been used to study connectivity. Rather, all applications of SCR models have used encounter probability models based on the Euclidean distance between traps and animal activity centers, which implies that home ranges are stationary, symmetric, and unaffected by landscape structure. In this paper we devise encounter probability models based on "ecological distance," i.e., the least-cost path between traps and activity centers, which is a function of both Euclidean distance and animal movement behavior in resistant landscapes. We integrate least-cost path models into a likelihood-based estimation scheme for spatial capture-recapture models in order to estimate population density and parameters of the least-cost encounter probability model. Therefore, it is possible to make explicit inferences about animal density, distribution, and landscape connectivity as it relates to animal movement from standard capture-recapture data. Furthermore, a simulation study demonstrated that ignoring landscape connectivity can result in negatively biased density estimators under the naive SCR model.

  14. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed, but feasible data evaluation requires projection onto partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize multiple elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
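
    The committor itself has a simple operational definition that a brute-force estimate makes explicit: q(x) is the fraction of trajectories started at x that reach the product basin before the reactant basin. The Python sketch below estimates it for overdamped Langevin dynamics on an invented one-dimensional double well; the elementary-reaction committor of the paper generalizes this to a choice among several product basins.

      import math, random

      # Brute-force committor for overdamped Langevin dynamics on the
      # double well V(x) = (x^2 - 1)^2: q(x0) = P(reach b before a | x0).

      def force(x):                            # -dV/dx
          return -4.0 * x * (x * x - 1.0)

      def committor(x0, beta=3.0, dt=1e-3, a=-1.0, b=1.0, n_traj=2000):
          hits_b = 0
          for _ in range(n_traj):
              x = x0
              while a < x < b:
                  noise = math.sqrt(2.0 * dt / beta) * random.gauss(0.0, 1.0)
                  x += force(x) * dt + noise
              hits_b += x >= b
          return hits_b / n_traj

      q_top = committor(0.0)                   # ~0.5 at the symmetric barrier top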

  15. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept

    PubMed Central

    Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-01-01

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme. PMID:29186850

  16. Optimal Power Allocation Strategy in a Joint Bistatic Radar and Communication System Based on Low Probability of Intercept.

    PubMed

    Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang

    2017-11-25

    In this paper, we investigate a low probability of intercept (LPI)-based optimal power allocation strategy for a joint bistatic radar and communication system, which is composed of a dedicated transmitter, a radar receiver, and a communication receiver. The joint system is capable of fulfilling the requirements of both radar and communications simultaneously. First, assuming that the signal-to-noise ratio (SNR) corresponding to the target surveillance path is much weaker than that corresponding to the line of sight path at radar receiver, the analytically closed-form expression for the probability of false alarm is calculated, whereas the closed-form expression for the probability of detection is not analytically tractable and is approximated due to the fact that the received signals are not zero-mean Gaussian under target presence hypothesis. Then, an LPI-based optimal power allocation strategy is presented to minimize the total transmission power for information signal and radar waveform, which is constrained by a specified information rate for the communication receiver and the desired probabilities of detection and false alarm for the radar receiver. The well-known bisection search method is employed to solve the resulting constrained optimization problem. Finally, numerical simulations are provided to reveal the effects of several system parameters on the power allocation results. It is also demonstrated that the LPI performance of the joint bistatic radar and communication system can be markedly improved by utilizing the proposed scheme.

  17. Tackling higher derivative ghosts with the Euclidean path integral

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanini, Michele; Department of Physics, Syracuse University, Syracuse, New York 13244; Trodden, Mark

    2011-05-15

    An alternative to the effective field theory approach to treat ghosts in higher derivative theories is to attempt to integrate them out via the Euclidean path integral formalism. It has been suggested that this method could provide a consistent framework within which we might tolerate the ghost degrees of freedom that plague, among other theories, the higher derivative gravity models that have been proposed to explain cosmic acceleration. We consider the extension of this idea to treating a class of terms with order six derivatives, and find that for a general term the Euclidean path integral approach works in the most trivial background, Minkowski. Moreover we see that even in de Sitter background, despite some difficulties, it is possible to define a probability distribution for tensorial perturbations of the metric.

  18. Identification of influential nodes in complex networks: Method from spreading probability viewpoint

    NASA Astrophysics Data System (ADS)

    Bao, Zhong-Kui; Ma, Chuang; Xiang, Bing-Bing; Zhang, Hai-Feng

    2017-02-01

    The problem of identifying influential nodes in complex networks has attracted much attention owing to its wide applications, including how to maximize information diffusion, boost product promotion in a viral marketing campaign, prevent a large-scale epidemic, and so on. From the spreading viewpoint, the probability of one node propagating its information to another is closely related to the shortest distance between them, the number of shortest paths, and the transmission rate. However, it is difficult to obtain the values of transmission rates for different cases; to overcome this difficulty, we use the reciprocal of the average degree to approximate the transmission rate. A semi-local centrality index is then proposed to incorporate the shortest distance, the number of shortest paths, and the reciprocal of the average degree simultaneously. By implementing simulations in real networks as well as synthetic networks, we verify that our proposed centrality can outperform well-known centralities, such as degree centrality, betweenness centrality, closeness centrality, k-shell centrality, and nonbacktracking centrality. In particular, our findings indicate that the performance of our method is most significant when the transmission rate approaches the epidemic threshold, which is the most meaningful region for the identification of influential nodes.
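
    A minimal sketch of this kind of spreading-probability centrality, assuming the reach probability from i to j is approximated by (number of shortest i-j paths) x beta^d_ij with beta = 1/<k>; the truncation radius R and the networkx test graph are illustrative choices, not the paper's exact index.

    ```python
    # Sketch of a spreading-probability centrality: the chance of reaching j
    # from i is approximated by (number of shortest i-j paths) * beta**d_ij,
    # with beta = 1/<k> standing in for the transmission rate. The radius R
    # and the test graph are illustrative.
    import networkx as nx

    def semi_local_centrality(G, R=3):
        beta = G.number_of_nodes() / (2.0 * G.number_of_edges())  # 1/<k>
        scores = {}
        for i in G:
            dist = nx.single_source_shortest_path_length(G, i, cutoff=R)
            score = 0.0
            for j, d in dist.items():
                if j != i:
                    n_paths = len(list(nx.all_shortest_paths(G, i, j)))
                    score += n_paths * beta ** d
            scores[i] = score
        return scores

    G = nx.karate_club_graph()
    print(sorted(semi_local_centrality(G).items(), key=lambda kv: -kv[1])[:5])
    ```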

  19. A graph-based network-vulnerability analysis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, L.P.; Phillips, C.; Gaylor, T.

    1998-05-03

    This paper presents a graph based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level of effort for the attacker, various graph algorithms such as shortest path algorithms can identify the attack paths with the highest probability of success.

  20. A graph-based network-vulnerability analysis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, L.P.; Phillips, C.; Gaylor, T.

    1998-01-01

    This report presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.
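
    The shortest-path step mentioned above can be sketched directly: with success probabilities on the arcs, the most probable attack path maximizes the product of arc probabilities, which is the shortest path under weights -log(p). The toy attack graph below is hypothetical, not one from the report.

    ```python
    # Sketch of the path-analysis step: with success probabilities p on the
    # arcs, the most probable attack path maximizes the product of the p's,
    # i.e. it is the shortest path under weights -log(p). The attack graph
    # below is hypothetical, not one from the report.
    import math
    import networkx as nx

    G = nx.DiGraph()
    for u, v, p in [("outside", "dmz", 0.8), ("dmz", "web", 0.5),
                    ("outside", "web", 0.2), ("web", "db", 0.9)]:
        G.add_edge(u, v, weight=-math.log(p), prob=p)

    path = nx.shortest_path(G, "outside", "db", weight="weight")
    prob = math.prod(G[u][v]["prob"] for u, v in zip(path, path[1:]))
    print(path, round(prob, 2))   # ['outside', 'dmz', 'web', 'db'] 0.36
    ```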

  1. Exact and Approximate Probabilistic Symbolic Execution

    NASA Technical Reports Server (NTRS)

    Luckow, Kasper; Pasareanu, Corina S.; Dwyer, Matthew B.; Filieri, Antonio; Visser, Willem

    2014-01-01

    Probabilistic software analysis seeks to quantify the likelihood of reaching a target event under uncertain environments. Recent approaches compute probabilities of execution paths using symbolic execution, but do not support nondeterminism. Nondeterminism arises naturally when no suitable probabilistic model can capture a program behavior, e.g., for multithreading or distributed systems. In this work, we propose a technique, based on symbolic execution, to synthesize schedulers that resolve nondeterminism to maximize the probability of reaching a target event. To scale to large systems, we also introduce approximate algorithms to search for good schedulers, speeding up established random sampling and reinforcement learning results through the quantification of path probabilities based on symbolic execution. We implemented the techniques in Symbolic PathFinder and evaluated them on nondeterministic Java programs. We show that our algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for Markov Decision Processes.

  2. The effect of on-line position correction on the dose distribution in focal radiotherapy for bladder cancer

    PubMed Central

    van Rooijen, Dominique C; van de Kamer, Jeroen B; Pool, René; Hulshof, Maarten CCM; Koning, Caro CE; Bel, Arjan

    2009-01-01

    Background The purpose of this study was to determine the dosimetric effect of on-line position correction for bladder tumor irradiation and to find methods to predict and handle this effect. Methods For 25 patients with unifocal bladder cancer intensity modulated radiotherapy (IMRT) with 5 beams was planned. The requirement for each plan was that 99% of the target volume received 95% of the prescribed dose. Tumor displacements from -2.0 cm to 2.0 cm in each dimension were simulated, using 0.5 cm increments, resulting in 729 simulations per patient. We assumed that on-line correction for the tumor was applied perfectly. We determined the correlation between the change in D99% and the change in path length, which is defined here as the distance from the skin to the isocenter for each beam. In addition the margin needed to avoid underdosage was determined and the probability that an underdosage occurs in a real treatment was calculated. Results Adjustments for tumor displacement with perfect on-line position correction resulted in an altered dose distribution. The altered fraction dose to the target varied from 91.9% to 100.4% of the prescribed dose. The mean D99% (± SD) was 95.8% ± 1.0%. There was a modest linear correlation between the difference in D99% and the change in path length of the beams after correction (R² = 0.590). The median probability that a systematic underdosage occurs in a real treatment was 0.23% (range: 0 - 24.5%). A margin of 2 mm reduced that probability to < 0.001% in all patients. Conclusion On-line position correction does result in an altered target coverage, due to changes in average path length after position correction. An extra margin can be added to prevent underdosage. PMID:19775479

  3. Multipath Very-Simplified Estimate of Adversary Sequence Interruption v. 2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snell, Mark K.

    2017-10-10

    MP VEASI is a training tool that models physical protection systems for fixed sites using Adversary Sequence Diagrams (ASDs) and then searches the ASD for the most-vulnerable adversary paths, i.e., the paths with the lowest Probability of Interruption among all paths through the ASD.

  4. Adaptive Dynamics, Control, and Extinction in Networked Populations

    DTIC Science & Technology

    2015-07-09

    network geometries. From the pre-history of paths that go extinct, a density function is created from the prehistory of these paths, and a clear local...density plots of Fig. 3b. Using the IAMM to compute the most probable path and comparing it to the prehistory of extinction events on stochastic networks

  5. Analysis of switch and examine combining with post-examining selection in cognitive radio

    NASA Astrophysics Data System (ADS)

    Agarwal, Rupali; Srivastava, Neelam; Katiyar, Himanshu

    2018-06-01

    Performing spectrum sensing in a fading environment is one of the most challenging tasks for a CR system. Diversity combining schemes are used to combat the effect of fading, and hence the detection probability of the CR is improved. Among the many diversity combining techniques, switched diversity offers one of the lowest-complexity solutions. A receiver with switched diversity looks for an acceptable diversity path (one having a signal-to-noise ratio (SNR) above the required threshold) to receive the data. In the conventional switch and examine combining (SEC) scheme, when no acceptable path is found after all the paths are examined, the receiver randomly chooses an unacceptable path. Switch and examine combining with post-examining selection (SECp) is a modified version of conventional SEC: it selects the best path when no acceptable path is found after all paths have been examined. In this paper, a formula for the probability of detection (P_d) is derived using the SECp and SEC diversity combining techniques over a Rayleigh fading channel. The performance of SECp is also compared with SEC and the no-diversity case; the comparison is made with the help of SNR vs. P_d and complementary receiver operating characteristic curves.
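
    A quick Monte Carlo sketch of the SEC/SECp difference is given below. The hard-decision rule (declare detection when the selected branch SNR exceeds a threshold gamma_d) is an assumed simplification of the paper's detection analysis, and all numerical parameters are illustrative.

    ```python
    # Monte Carlo sketch comparing SEC and SECp branch selection over
    # Rayleigh fading (exponential branch SNRs). "Detection" here is the
    # simplified rule selected-SNR >= gamma_d; all parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    L, mean_snr, T, gamma_d, N = 3, 5.0, 3.0, 4.0, 100_000
    snr = rng.exponential(mean_snr, size=(N, L))

    def select(branches, post_examining):
        for g in branches:                 # examine the paths in order
            if g >= T:
                return g                   # first acceptable path wins
        # no acceptable path after examining all of them:
        return branches.max() if post_examining else branches[-1]

    pd_sec = np.mean([select(row, False) >= gamma_d for row in snr])
    pd_secp = np.mean([select(row, True) >= gamma_d for row in snr])
    print(f"SEC: {pd_sec:.3f}   SECp: {pd_secp:.3f}")   # SECp >= SEC
    ```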

  6. Path integrals and large deviations in stochastic hybrid systems.

    PubMed

    Bressloff, Paul C; Newby, Jay M

    2014-04-01

    We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

  7. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Received time-series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed along with linearization for the estimation of source range and depth, water column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.

  8. Observations of a probable change in the solar radius between 1715 and 1979

    NASA Technical Reports Server (NTRS)

    Dunham, D. W.; Sofia, S.; Fiala, A. D.; Muller, P. M.; Herald, D.

    1980-01-01

    A decrease in the solar radius is determined using the technique of Dunham and Dunham (1973), in which timed observations are made just inside the path edges. When the method is applied to the solar eclipses of 1715, 1976, and 1979, the solar radius for 1715 is 0.34 ± 0.2 arc second larger than the recent values, with no significant change between 1976 and 1979. The duration of totality is examined as a function of distance from the edges of the path. Corrections to the radius of the sun derived from observations of the 1976 and 1979 eclipses by the International Occultation Timing Association are also presented.

  9. Counterfactuality of ‘counterfactual’ communication

    NASA Astrophysics Data System (ADS)

    Vaidman, L.

    2015-11-01

    The counterfactuality of the recently proposed protocols for direct quantum communication is analyzed. It is argued that the protocols can be counterfactual only for one value of the transmitted bit. The protocols achieve a reduced probability of detection of the particle in the transmission channel by increasing the number of paths in the channel. However, this probability is not lower than the probability of detecting a particle actually passing through such a multi-path channel, which was found to be surprisingly small. The relation between security and counterfactuality of the protocols is discussed. An analysis of counterfactuality of the protocols in the framework of the Bohmian interpretation is performed.

  10. Retrievals of atmospheric columnar carbon dioxide and methane from GOSAT observations with photon path-length probability density function (PPDF) method

    NASA Astrophysics Data System (ADS)

    Bril, A.; Oshchepkov, S.; Yokota, T.; Yoshida, Y.; Morino, I.; Uchino, O.; Belikov, D. A.; Maksyutov, S. S.

    2014-12-01

    We retrieved the column-averaged dry-air mole fractions of atmospheric carbon dioxide (XCO2) and methane (XCH4) from the radiance spectra measured by the Greenhouse gases Observing SATellite (GOSAT) for 48 months of satellite operation from June 2009. A recent version of the photon path-length probability density function (PPDF)-based algorithm was used to estimate XCO2 and optical path modifications in terms of PPDF parameters. We also present results of numerical simulations for over-land observations and "sharp edge" tests for sun-glint mode to discuss the algorithm accuracy under conditions of strong optical path modification. For the methane abundance retrieved from the 1.67-µm absorption band we applied an optical path correction based on PPDF parameters from the 1.6-µm carbon dioxide (CO2) absorption band. Similarly to the CO2-proxy technique, this correction assumes identical light path modifications in the 1.67-µm and 1.6-µm bands. However, the proxy approach needs pre-defined XCO2 values to compute XCH4, whilst the PPDF-based approach does not use prior assumptions on CO2 concentrations. Post-processing data correction for XCO2 and XCH4 over land observations was performed using a regression matrix based on multivariate analysis of variance (MANOVA). The MANOVA statistics were applied to the GOSAT retrievals using reference collocated measurements of the Total Carbon Column Observing Network (TCCON). The regression matrix was constructed using the parameters that were found to correlate with GOSAT-TCCON discrepancies: the PPDF parameters α and ρ, which are mainly responsible for shortening and lengthening of the optical path due to atmospheric light scattering; solar and satellite zenith angles; surface pressure; and surface albedo in three GOSAT short wave infrared (SWIR) bands. Application of the post-correction generally improves the statistical characteristics of the GOSAT-TCCON correlation diagrams for individual stations as well as for aggregated data. In addition to the analysis of the observations over 12 TCCON stations, we estimated temporal and spatial trends (interannual XCO2 and XCH4 variations, seasonal cycles, latitudinal gradients) and compared them with modeled results as well as with similar estimates from other GOSAT retrievals.

  11. Sum over Histories Representation for Kinetic Sensitivity Analysis: How Chemical Pathways Change When Reaction Rate Coefficients Are Varied

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Shirong; Davis, Michael J.; Skodje, Rex T.

    2015-11-12

    The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate limiting steps along important chemical pathways or by reactions that control the branching of reactive flux.

  12. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

    Blood glucose monitoring is of great importance for controlling the course of diabetes and preventing complications. At present, clinical blood glucose concentration measurement is invasive and could be replaced by noninvasive spectroscopic analytical techniques. Among the various parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons exit. But there is a problem: the epidermal layer contains no arteries, veins, or capillary vessels, so when photons propagate and interact with tissue in the epidermal layer, they pick up no glucose information. A new criterion, named the effective path length in this paper, is therefore proposed to determine the optimal distance. The path length of each photon travelling in the dermis is recorded when running the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each exit point is calculated, and the detector should be placed at the point with the largest effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
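
    A toy version of this effective-path-length criterion can be sketched as follows; the two-layer geometry, optical parameters, and isotropic 2-D scattering model are illustrative simplifications, not the simulation settings of the paper.

    ```python
    # Toy 2-D Monte Carlo of the effective-path-length criterion: only the
    # path length a photon accumulates inside the dermis is credited to its
    # exit point. Layer thicknesses and the mean free path are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    epi, bottom, mfp, n_photons, bin_w = 0.03, 0.2, 0.05, 20_000, 0.02  # cm
    bins = np.zeros(50)              # summed effective path length per bin

    for _ in range(n_photons):
        x = z = effective = 0.0
        for _ in range(200):         # cap the number of scattering events
            step = rng.exponential(mfp)
            theta = rng.uniform(0.0, 2.0 * np.pi)
            z_new = z + step * np.cos(theta)
            if min(z, z_new) > epi:  # crude test: whole step inside dermis
                effective += step
            x, z = x + step * np.sin(theta), z_new
            if z < 0.0:              # photon re-emerges at the surface
                k = int(abs(x) / bin_w)
                if k < bins.size:
                    bins[k] += effective
                break
            if z > bottom:           # photon lost below the dermis
                break

    print("best source-detector distance ~", (bins.argmax() + 0.5) * bin_w, "cm")
    ```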

  13. Multi-Sensor Integration to Map Odor Distribution for the Detection of Chemical Sources.

    PubMed

    Gao, Xiang; Acar, Levent

    2016-07-04

    This paper addresses the problem of mapping the odor distribution produced by a chemical source using multi-sensor integration and reasoning system design. Odor localization is the problem of finding the source of an odor or other volatile chemical. Most localization methods require a mobile vehicle to follow an odor plume along its entire path, which is time consuming and may be especially difficult in a cluttered environment. To address both of the above challenges, this paper proposes a novel algorithm that combines data from odor and anemometer sensors, and combines sensor data from different positions. Initially, a multi-sensor integration method, together with the path of airflow, was used to map the pattern of odor particle movement. Then, more sensors are introduced at specific regions to determine the probable location of the odor source. Finally, the results of an odor source location simulation and a real experiment are presented.

  14. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems.

    PubMed

    Branduardi, Davide; Faraldo-Gómez, José D

    2013-09-10

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D-mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string.

  15. String method for calculation of minimum free-energy paths in Cartesian space in freely-tumbling systems

    PubMed Central

    Branduardi, Davide; Faraldo-Gómez, José D.

    2014-01-01

    The string method is a molecular-simulation technique that aims to calculate the minimum free-energy path of a chemical reaction or conformational transition, in the space of a pre-defined set of reaction coordinates that is typically highly dimensional. Any descriptor may be used as a reaction coordinate, but arguably the Cartesian coordinates of the atoms involved are the most unprejudiced and intuitive choice. Cartesian coordinates, however, present a non-trivial problem, in that they are not invariant to rigid-body molecular rotations and translations, which ideally ought to be unrestricted in the simulations. To overcome this difficulty, we reformulate the framework of the string method to integrate an on-the-fly structural-alignment algorithm. This approach, referred to as SOMA (String method with Optimal Molecular Alignment), enables the use of Cartesian reaction coordinates in freely tumbling molecular systems. In addition, this scheme permits the dissection of the free-energy change along the most probable path into individual atomic contributions, thus revealing the dominant mechanism of the simulated process. This detailed analysis also provides a physically-meaningful criterion to coarse-grain the representation of the path. To demonstrate the accuracy of the method we analyze the isomerization of the alanine dipeptide in vacuum and the chair-to-inverted-chair transition of β-D-mannose in explicit water. Notwithstanding the simplicity of these systems, the SOMA approach reveals novel insights into the atomic mechanism of these isomerizations. In both cases, we find that the dynamics and the energetics of these processes are controlled by interactions involving only a handful of atoms in each molecule. Consistent with this result, we show that a coarse-grained SOMA calculation defined in terms of these subsets of atoms yields near-identical minimum free-energy paths and committor distributions to those obtained via a highly-dimensional string. PMID:24729762

  16. Design and Evaluation of a Dynamic Programming Flight Routing Algorithm Using the Convective Weather Avoidance Model

    NASA Technical Reports Server (NTRS)

    Ng, Hok K.; Grabbe, Shon; Mukherjee, Avijit

    2010-01-01

    The optimization of traffic flows in congested airspace with varying convective weather is a challenging problem. One approach is to generate shortest routes between origins and destinations while meeting airspace capacity constraints in the presence of uncertainties, such as weather and airspace demand. This study focuses on the development of an optimal flight path search algorithm that optimizes national airspace system throughput and efficiency in the presence of uncertainties. The algorithm is based on dynamic programming and utilizes the predicted probability that an aircraft will deviate around convective weather. It is shown that the running time of the algorithm increases linearly with the total number of links between all stages. The optimal routes minimize a combination of fuel cost and the expected cost of route deviation due to convective weather. They are considered as alternatives to the set of coded departure routes predefined by the FAA to reroute pre-departure flights around weather or air traffic constraints. A formula that calculates the predicted probability of deviation from a given flight path is also derived. The predicted probability of deviation is calculated for all path candidates, and routes with the best probability are selected as optimal. The predicted probability of deviation serves as a computable measure of reliability in pre-departure rerouting. The algorithm can also be extended to automatically adjust its design parameters to satisfy the desired level of reliability.
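
    The backward-induction core of such a dynamic-programming router can be sketched in a few lines; the stage graph, fuel costs, deviation probabilities, and the deviation penalty C_DEV below are hypothetical.

    ```python
    # Sketch of backward-induction routing: each link carries a fuel cost
    # plus the expected deviation penalty p_dev * C_DEV. The stage graph and
    # all numbers are hypothetical.
    C_DEV = 50.0
    # stages[k][node] -> list of (next_node, fuel_cost, p_deviation)
    stages = [
        {"A": [("B1", 10, 0.10), ("B2", 12, 0.00)]},
        {"B1": [("C", 11, 0.40)], "B2": [("C", 10, 0.05)]},
    ]

    def dp_route(stages, goal="C"):
        value, best = {goal: 0.0}, {}
        for layer in reversed(stages):          # backward induction
            for node, links in layer.items():
                cost, nxt = min((fuel + p * C_DEV + value[n], n)
                                for n, fuel, p in links if n in value)
                value[node], best[node] = cost, nxt
        return value, best

    value, best = dp_route(stages)
    print(value["A"], best)    # 24.5 and the chosen successor of each node
    ```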

  17. Emergence and stability of intermediate open vesicles in disk-to-vesicle transitions.

    PubMed

    Li, Jianfeng; Zhang, Hongdong; Qiu, Feng; Shi, An-Chang

    2013-07-01

    The transition between two basic structures, a disk and an enclosed vesicle, of a finite membrane is studied by examining the minimum energy path (MEP) connecting these two states. The MEP is constructed using the string method applied to continuum elastic membrane models. The results reveal that, besides the commonly observed disk and vesicle, open vesicles (bowl-shaped vesicles or vesicles with a pore) can become stable or metastable shapes. The emergence, stability, and probability distribution of these open vesicles are analyzed. It is demonstrated that open vesicles can be stabilized by higher-order elastic energies. The estimated probability distribution of the different structures is in good agreement with available experiments.

  18. UCAV path planning in the presence of radar-guided surface-to-air missile threats

    NASA Astrophysics Data System (ADS)

    Zeitz, Frederick H., III

    This dissertation addresses the problem of path planning for unmanned combat aerial vehicles (UCAVs) in the presence of radar-guided surface-to-air missiles (SAMs). The radars, collocated with SAM launch sites, operate within the structure of an Integrated Air Defense System (IADS) that permits communication and cooperation between individual radars. The problem is formulated in the framework of the interaction between three sub-systems: the aircraft, the IADS, and the missile. The main features of this integrated model are: The aircraft radar cross section (RCS) depends explicitly on both the aspect and bank angles; hence, the RCS and aircraft dynamics are coupled. The probabilistic nature of IADS tracking is accounted for; namely, the probability that the aircraft has been continuously tracked by the IADS depends on the aircraft RCS and range from the perspective of each radar within the IADS. Finally, the requirement to maintain tracking prior to missile launch and during missile flyout are also modeled. Based on this model, the problem of UCAV path planning is formulated as a minimax optimal control problem, with the aircraft bank angle serving as control. Necessary conditions of optimality for this minimax problem are derived. Based on these necessary conditions, properties of the optimal paths are derived. These properties are used to discretize the dynamic optimization problem into a finite-dimensional, nonlinear programming problem that can be solved numerically. Properties of the optimal paths are also used to initialize the numerical procedure. A homotopy method is proposed to solve the finite-dimensional, nonlinear programming problem, and a heuristic method is proposed to improve the discretization during the homotopy process. Based upon the properties of numerical solutions, a method is proposed for parameterizing and storing information for later recall in flight to permit rapid replanning in response to changing threats. Illustrative examples are presented that confirm the standard flying tactics of "denying range, aspect, and aim," by yielding flight paths that "weave" to avoid long exposures of aspects with large RCS.

  19. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    NASA Astrophysics Data System (ADS)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probing method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.

  20. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  1. The most likely voltage path and large deviations approximations for integrate-and-fire neurons.

    PubMed

    Paninski, Liam

    2006-08-01

    We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
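
    As an illustration of the "shooting" idea, assume leaky integrate-and-fire dynamics f(V) = -V/tau with constant noise; the Euler-Lagrange equation then reduces to V'' = V/tau^2, and one searches over the unknown initial slope until the path hits threshold at the spike time. This is a sketch under those assumptions, not the authors' code.

    ```python
    # Sketch of the shooting algorithm under assumed leaky IF dynamics
    # f(V) = -V/tau: the Euler-Lagrange equation becomes V'' = V/tau**2, and
    # we search the initial slope so that V(0)=v_reset and V(T)=v_thresh.
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    tau, T, v_reset, v_thresh = 1.0, 2.0, 0.0, 1.0

    def endpoint_miss(slope):
        sol = solve_ivp(lambda t, y: [y[1], y[0] / tau**2],
                        (0.0, T), [v_reset, slope], rtol=1e-8)
        return sol.y[0, -1] - v_thresh   # how far we miss the threshold

    slope0 = brentq(endpoint_miss, -10.0, 10.0)
    print(f"optimal initial slope dV/dt(0) ~ {slope0:.4f}")  # ~ 1/sinh(2)
    ```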

  2. Comparison of an Ultrasonic Phased Array Evaluation with Destructive Analysis of a Documented Leak Path in a Nozzle Removed from Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinson, Anthony D.; Crawford, Susan L.; MacFarlan, Paul J.

    2012-09-24

    Non-destructive and destructive testing methods were employed to evaluate a documented boric acid leakage path through an Alloy 600 control rod drive mechanism (CRDM) penetration from the North Anna Unit 2 reactor pressure vessel head that was removed from service in 2002. A previous ultrasonic in-service-inspection (ISI) conducted by industry prior to the head removal identified a probable leakage path in Nozzle 63 located in the interference fit between the penetration tube and the vessel head. In this current examination, Nozzle 63 was examined using phased array (PA) ultrasonic testing with a 5.0-MHz, eight-element annular array; immersion data were acquired from the nozzle inner diameter (ID) surface. A variety of focal laws were employed to evaluate the signal responses from the interference fit region. These responses were compared to responses obtained from a mockup specimen that was used to determine detection limits and characterization capabilities for wastage and boric acid presence in the interference fit region. Nozzle 63 was destructively examined after the completion of the ultrasonic nondestructive evaluation (NDE) to visually assess the leak paths. The destructive and nondestructive results compared favorably.

  3. Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.

    PubMed

    Lin, Lanny; Goodrich, Michael A

    2014-12-01

    During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environmental elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions, and search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing ones, yielding efficient paths with near-optimal payoffs.
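
    A minimal sketch of GMM-based subregion prioritization: fit a Gaussian mixture to samples from the probability map and rank the modes. The ranking score used below (mixture weight divided by covariance volume) is a hypothetical stand-in for the paper's mode goodness ratio, and the data are synthetic.

    ```python
    # Sketch of GMM-based prioritization of search subregions. The score
    # (mixture weight / sqrt(det covariance)) is a hypothetical stand-in for
    # the paper's mode goodness ratio; data are synthetic.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    samples = np.vstack([rng.normal([2.0, 2.0], 0.3, (400, 2)),   # sharp mode
                         rng.normal([7.0, 5.0], 1.0, (200, 2))])  # broad mode

    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)
    scores = gmm.weights_ / np.sqrt(np.linalg.det(gmm.covariances_))
    for k in np.argsort(-scores):
        print(f"search mode near {gmm.means_[k].round(2)}, score {scores[k]:.1f}")
    ```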

  4. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
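
    For reference, here is a minimal sketch of the unconstrained Viterbi recursion that the k-segment algorithms generalize; the two-state HMM parameters are illustrative only.

    ```python
    # Minimal Viterbi recursion for the MAP hidden-state sequence; the
    # two-state HMM below is illustrative only.
    import numpy as np

    pi = np.array([0.6, 0.4])                    # initial distribution
    A = np.array([[0.9, 0.1], [0.2, 0.8]])       # transition probabilities
    B = np.array([[0.7, 0.3], [0.1, 0.9]])       # emission probabilities
    obs = [0, 0, 1, 1, 1]

    logd = np.log(pi) + np.log(B[:, obs[0]])     # log-delta at t = 0
    back = []
    for o in obs[1:]:
        trans = logd[:, None] + np.log(A)        # score of each predecessor
        back.append(trans.argmax(axis=0))
        logd = trans.max(axis=0) + np.log(B[:, o])

    path = [int(logd.argmax())]
    for bp in reversed(back):                    # backtrack the MAP path
        path.append(int(bp[path[-1]]))
    print(path[::-1])                            # e.g. [0, 0, 1, 1, 1]
    ```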

  5. Open Quantum Random Walks on the Half-Line: The Karlin-McGregor Formula, Path Counting and Foster's Theorem

    NASA Astrophysics Data System (ADS)

    Jacq, Thomas S.; Lardizabal, Carlos F.

    2017-11-01

    In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.

  6. Diagnosability of Stochastic Chemical Kinetic Systems: A Discrete Event Systems Approach (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    …occurrence of the finite sample path ω. These distributions are defined recursively as π_0(x) := π_0(x), π_{ωσ}(x′) := Σ_{x∈X} π_ω(x) r(x′, σ | x) e^{−r(x′, σ | x)…} e^{−r_x τ}. (2) This is the probability that the arrival time of the first event is greater than τ. For finite sample paths with strings…

  7. Theory of flapping flight

    NASA Technical Reports Server (NTRS)

    Lippisch, Alexander

    1925-01-01

    Before attempting to construct a human-powered aircraft, the aviator will first try to post himself theoretically on the possible method of operating the flapping wings. This report will present a graphic and mathematical method, which renders it possible to determine the power required, so far as it can be done on the basis of the wing dimensions. We will first consider the form of the flight path through the air. The simplest form is probably the curve of ordinary wave motion. After finding the flight curve, we must next determine the change in the angle of attack while passing through the different phases of the wave.

  8. Probability evolution method for exit location distribution

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit is characterized by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.

  9. Multiple transmitter performance with appropriate amplitude modulation for free-space optical communication.

    PubMed

    Tellez, Jason A; Schmidt, Jason D

    2011-08-20

    The propagation of a free-space optical communications signal through atmospheric turbulence experiences random fluctuations in intensity, including signal fades, which negatively impact the performance of the communications link. The gamma-gamma probability density function is commonly used to model the scintillation of a single beam. One proposed method to reduce the occurrence of scintillation-induced fades at the receiver plane involves the use of multiple beams propagating through independent paths, resulting in a sum of independent gamma-gamma random variables. Recently an analytical model for the probability distribution of irradiance from the sum of multiple independent beams was developed. Because truly independent beams are practically impossible to create, we present here a more general but approximate model for the distribution of beams traveling through partially correlated paths. This model compares favorably with wave-optics simulations and highlights the reduced scintillation as the number of transmitted beams is increased. Additionally, a pulse-position modulation scheme is used to reduce the impact of signal fades when they occur. Analytical and simulated results showed significantly improved performance when compared to fixed threshold on/off keying. © 2011 Optical Society of America
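
    The benefit of summing beams can be checked with a short Monte Carlo sketch: each beam's irradiance is drawn as a product of two unit-mean gamma variates (the gamma-gamma model), and a fade is an average irradiance below threshold. Fully independent beams are assumed here, and alpha, beta, and the threshold are illustrative.

    ```python
    # Monte Carlo sketch: each beam's irradiance is gamma-gamma (a product
    # of two unit-mean gamma variates); a fade is an average irradiance
    # below threshold. Independent beams assumed; parameters illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta, thresh, trials = 4.0, 2.0, 0.3, 200_000

    def fade_probability(n_beams):
        x = rng.gamma(alpha, 1.0 / alpha, (trials, n_beams))
        y = rng.gamma(beta, 1.0 / beta, (trials, n_beams))
        return float(((x * y).mean(axis=1) < thresh).mean())

    for n in (1, 2, 4, 8):
        print(n, "beam(s): fade probability", fade_probability(n))
    ```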

  10. Electron emission produced by photointeractions in a slab target

    NASA Technical Reports Server (NTRS)

    Thinger, B. E.; Dayton, J. A., Jr.

    1973-01-01

    The current density and energy spectrum of escaping electrons generated in a uniform plane slab target which is being irradiated by the gamma flux field of a nuclear reactor are calculated by using experimental gamma energy transfer coefficients, electron range and energy relations, and escape probability computations. The probability of escape and the average path length of escaping electrons are derived for an isotropic distribution of monoenergetic photons. The method of estimating the flux and energy distribution of electrons emerging from the surface is outlined, and a sample calculation is made for a 0.33-cm-thick tungsten target located next to the core of a nuclear reactor. The results are to be used as a guide in electron beam synthesis of reactor experiments.

  11. Stochastic DT-MRI connectivity mapping on the GPU.

    PubMed

    McGraw, Tim; Nadar, Mariappan

    2007-01-01

    We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware.

  12. Markov State Models of gene regulatory networks.

    PubMed

    Chu, Brian K; Tse, Margaret J; Sato, Royce R; Read, Elizabeth L

    2017-02-06

    Gene regulatory networks with dynamics characterized by multiple stable states underlie cell fate-decisions. Quantitative models that can link molecular-level knowledge of gene regulation to a global understanding of network dynamics have the potential to guide cell-reprogramming strategies. Networks are often modeled by the stochastic Chemical Master Equation, but methods for systematic identification of key properties of the global dynamics are currently lacking. Here, a Markov State Model framework, adopted from the field of atomistic Molecular Dynamics, is applied: the method identifies the number, phenotypes, and lifetimes of long-lived states for a set of common gene regulatory network models. Application of transition path theory to the constructed Markov State Model decomposes global dynamics into a set of dominant transition paths and associated relative probabilities for stochastic state-switching. In this proof-of-concept study, we found that the Markov State Model provides a general framework for analyzing and visualizing stochastic multistability and state-transitions in gene networks. Our results suggest that this framework can be a useful tool for quantitative Systems Biology at the network scale.
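
    The Markov-State-Model step can be sketched with a toy transition matrix: the leading eigenvector gives the stationary distribution, and the next eigenvalues give the implied lifetimes of the long-lived states. The 3x3 matrix below is illustrative, not a fitted gene-network model.

    ```python
    # Toy Markov State Model: the top eigenvector of the (row-stochastic)
    # transition matrix gives the stationary distribution; the next
    # eigenvalues give implied lifetimes of long-lived states.
    import numpy as np

    lag = 1.0                                   # MSM lag time (arbitrary units)
    T = np.array([[0.98, 0.01, 0.01],
                  [0.02, 0.97, 0.01],
                  [0.02, 0.02, 0.96]])

    vals, vecs = np.linalg.eig(T.T)
    order = np.argsort(-vals.real)
    pi = np.abs(vecs[:, order[0]].real)
    pi /= pi.sum()                              # stationary distribution
    timescales = -lag / np.log(vals.real[order[1:]])

    print("stationary distribution:", pi.round(3))
    print("implied timescales:", timescales.round(1))
    ```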

  13. Nonequilibrium umbrella sampling in spaces of many order parameters

    NASA Astrophysics Data System (ADS)

    Dickson, Alex; Warmflash, Aryeh; Dinner, Aaron R.

    2009-02-01

    We recently introduced an umbrella sampling method for obtaining nonequilibrium steady-state probability distributions projected onto an arbitrary number of coordinates that characterize a system (order parameters) [A. Warmflash, P. Bhimalapuram, and A. R. Dinner, J. Chem. Phys. 127, 154112 (2007)]. Here, we show how our algorithm can be combined with the image update procedure from the finite-temperature string method for reversible processes [E. Vanden-Eijnden and M. Venturoli, "Revisiting the finite temperature string method for calculation of reaction tubes and free energies," J. Chem. Phys. (in press)] to enable restricted sampling of a nonequilibrium steady state in the vicinity of a path in a many-dimensional space of order parameters. For the study of transitions between stable states, the adapted algorithm results in improved scaling with the number of order parameters and the ability to progressively refine the regions of enforced sampling. We demonstrate the algorithm by applying it to a two-dimensional model of driven Brownian motion and a coarse-grained (Ising) model for nucleation under shear. It is found that the choice of order parameters can significantly affect the convergence of the simulation; local magnetization variables other than those used previously for sampling transition paths in Ising systems are needed to ensure that the reactive flux is primarily contained within a tube in the space of order parameters. The relation of this method to other algorithms that sample the statistics of path ensembles is discussed.

  14. Do parent–child acculturation gaps affect early adolescent Latino alcohol use? A study of the probability and extent of use

    PubMed Central

    2013-01-01

    The literature has been mixed regarding how parent–child relationships are affected by the acculturation process and how this process relates to alcohol use among Latino youth. The mixed results may be due to, at least, two factors: First, staggered migration in which one or both parents arrive to the new country and then send for the children may lead to faster acculturation in parents than in children for some families. Second, acculturation may have different effects depending on which aspects of alcohol use are being examined. This study addresses the first factor by testing for a curvilinear trend in the acculturation-alcohol use relationship and the second by modeling past year alcohol use as a zero inflated negative binomial distribution. Additionally, this study examined the unique and mediation effects of parent–child acculturation discrepancies (gap), mother involvement in children’s schooling, father involvement in children’s schooling, and effective parenting on youth alcohol use during the last 12 months, measured as the probability of using and the extent of use. Direct paths from parent–child acculturation discrepancy to alcohol use, and mediated paths through mother involvement, father involvement, and effective parenting were also tested. Only father involvement fully mediated the path from parent–child acculturation discrepancies to the probability of alcohol use. None of the variables examined mediated the path from parent–child acculturation discrepancies to the extent of alcohol use. Effective parenting was unrelated to acculturation discrepancies; however, it maintained a significant direct effect on the probability of youth alcohol use and the extent of use after controlling for mother and father involvement. Implications for prevention strategies are discussed. PMID:23347822

  15. About Schrödinger Equation on Fractals Curves Imbedding in R^3

    NASA Astrophysics Data System (ADS)

    Golmankhaneh, Alireza Khalili; Golmankhaneh, Ali Khalili; Baleanu, Dumitru

    2015-04-01

    In this paper we introduce quantum mechanics on fractal time-space. In the suggested formalism, time and space vary on a Cantor set and a von Koch curve, respectively. Using the Feynman path method in quantum mechanics and F^α-calculus, we derive the Schrödinger equation on fractal time-space. The fractal Hamiltonian and momentum operators are indicated. Moreover, the continuity equation and the probability density are given in terms of F^α-calculus.

  16. A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding.

    ERIC Educational Resources Information Center

    Zelek, John S.; Bromley, Sam; Asmar, Daniel; Thompson, David

    2003-01-01

    A device that relays navigational information using a portable tactile glove and a wearable computer and camera system was tested with nine adults with visual impairments. Paths traversed by subjects negotiating an obstacle course were not qualitatively different from paths produced with existing wayfinding devices and hitting probabilities were…

  17. Path connectivity based spectral defragmentation in flexible bandwidth networks.

    PubMed

    Wang, Ying; Zhang, Jie; Zhao, Yongli; Zhang, Jiawei; Zhao, Jie; Wang, Xinbo; Gu, Wanyi

    2013-01-28

    Optical networks with flexible bandwidth provisioning have become a very promising networking architecture, enabling efficient resource utilization and supporting heterogeneous bandwidth demands. In this paper, two novel spectrum defragmentation approaches, the Maximum Path Connectivity (MPC) algorithm and the Path Connectivity Triggering (PCT) algorithm, are proposed based on the notion of path connectivity, which is defined to represent the maximum variation of node switching ability along a path in flexible bandwidth networks. A cost-performance-ratio based profitability model is given to quantify the pros and cons of spectrum defragmentation. We compare the two proposed algorithms with a non-defragmentation algorithm in terms of blocking probability, and then analyze the differences in defragmentation profitability between the MPC and PCT algorithms.

  18. Determination of drill paths for percutaneous cochlear access accounting for target positioning error

    NASA Astrophysics Data System (ADS)

    Noble, Jack H.; Warren, Frank M.; Labadie, Robert F.; Dawant, Benoit; Fitzpatrick, J. Michael

    2007-03-01

    In cochlear implant surgery an electrode array is permanently implanted to stimulate the auditory nerve and allow deaf people to hear. Current surgical techniques require wide excavation of the mastoid region of the temporal bone and one to three hours time to avoid damage to vital structures. Recently a far less invasive approach has been proposed-percutaneous cochlear access, in which a single hole is drilled from skull surface to the cochlea. The drill path is determined by attaching a fiducial system to the patient's skull and then choosing, on a pre-operative CT, an entry point and a target point. The drill is advanced to the target, the electrodes placed through the hole, and a stimulator implanted at the surface of the skull. The major challenge is the determination of a safe and effective drill path, which with high probability avoids specific vital structures (the facial nerve, the ossicles, and the external ear canal) and arrives at the basal turn of the cochlea. These four features lie within a few millimeters of each other, the drill is one millimeter in diameter, and errors in the determination of the target position are on the order of 0.5 mm root-mean-square. Thus, path selection is both difficult and critical to the success of the surgery. This paper presents a method for finding optimally safe and effective paths while accounting for target positioning error.

  19. Shrunk loop theorem for the topology probabilities of closed Brownian (or Feynman) paths on the twice punctured plane

    NASA Astrophysics Data System (ADS)

    Giraud, O.; Thain, A.; Hannay, J. H.

    2004-02-01

    The shrunk loop theorem proved here is an integral identity which facilitates the calculation of the relative probability (or probability amplitude) of any given topology that a free, closed Brownian (or Feynman) path of a given 'duration' might have on the twice punctured plane (plane with two marked points). The result is expressed as a 'scattering' series of integrals of increasing dimensionality based on the maximally shrunk version of the path. Physically, this applies in different contexts: (i) the topology probability of a closed ideal polymer chain on a plane with two impassable points, (ii) the trace of the Schrödinger Green function, and thence spectral information, in the presence of two Aharonov-Bohm fluxes and (iii) the same with two branch points of a Riemann surface instead of fluxes. Our theorem starts from the Stovicek scattering expansion for the Green function in the presence of two Aharonov-Bohm flux lines, which itself is based on the famous Sommerfeld one puncture point solution of 1896 (the one puncture case has much easier topology, just one winding number). Stovicek's expansion itself can supply the results at the expense of choosing a base point on the loop and then integrating it away. The shrunk loop theorem eliminates this extra two-dimensional integration, distilling the topology from the geometry.

  20. H theorem for generalized entropic forms within a master-equation framework

    NASA Astrophysics Data System (ADS)

    Casas, Gabriela A.; Nobre, Fernando D.; Curado, Evaldo M. F.

    2016-03-01

    The H theorem is proven for generalized entropic forms, in the case of a discrete set of states. The associated probability distributions evolve in time according to a master equation, for which the corresponding transition rates depend on these entropic forms. An important equation is found that describes the time evolution of the transition rates and probabilities in such a way as to drive the system towards an equilibrium state. In the particular case of the Boltzmann-Gibbs entropy, it is shown that this equation is satisfied in the microcanonical ensemble only for symmetric probability transition rates, characterizing a single path to the equilibrium state. This equation completes the proof of the H theorem for generalized entropic forms, associated with systems characterized by complex dynamics, e.g., those presenting nonsymmetric probability transition rates and more than one path towards the same equilibrium state. Some examples considering generalized entropies from the literature are discussed, showing that they should be applicable to a wide range of natural phenomena, mainly those within the realm of complex systems.

  1. Theoretical Analysis of Rain Attenuation Probability

    NASA Astrophysics Data System (ADS)

    Roy, Surendra Kr.; Jha, Santosh Kr.; Jha, Lallan

    2007-07-01

    Satellite communication technologies are now highly developed, and high-quality, distance-independent services have expanded over a very wide area. The system design of the Hokkaido Integrated Telecommunications (HIT) network must first overcome outages of satellite links due to rain attenuation in Ka frequency bands. In this paper a theoretical analysis of rain attenuation probability on a slant path has been made. The proposed formula is based on the Weibull distribution and incorporates recent ITU-R recommendations concerning the necessary rain-rate and rain-height inputs. The error behaviour of the model was tested against the rain attenuation prediction model recommended by ITU-R for a large number of experiments at different probability levels. Compared to the ITU-R model, the novel slant-path rain attenuation prediction model exhibits similar behaviour at low time percentages and a better root-mean-square error performance for probability levels above 0.02%. The presented models have the advantage of low implementation complexity and are considered useful for educational and back-of-the-envelope computations.
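
    As a rough illustration of the kind of back-of-the-envelope computation the abstract mentions, the sketch below evaluates and inverts a Weibull exceedance curve for slant-path attenuation. The scale and shape values are hypothetical placeholders, not the paper's fitted parameters, which would come from the ITU-R rain-rate and rain-height inputs.

    ```python
    import numpy as np

    def weibull_exceedance(a_dB, scale, shape):
        """P(attenuation > a_dB) for Weibull-distributed rain attenuation."""
        return np.exp(-(a_dB / scale) ** shape)

    def attenuation_at_probability(p, scale, shape):
        """Invert the exceedance curve: attenuation exceeded with probability p."""
        return scale * (-np.log(p)) ** (1.0 / shape)

    # Hypothetical slant-path parameters for a Ka-band link.
    scale, shape = 4.0, 0.7
    for p in (0.01, 0.001, 0.0002):   # fractions of an average year
        a = attenuation_at_probability(p, scale, shape)
        print(f"P = {p:.4%}: exceeded attenuation ~ {a:.1f} dB")
    ```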

  2. The Misapplication of Probability Theory in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Racicot, Ronald

    2014-03-01

    This article is a revision of two papers submitted to the APS in the past two and a half years. In these papers, arguments and proofs are summarized for the following: (1) the wrong conclusion by EPR that quantum mechanics is incomplete, perhaps requiring the addition of "hidden variables" for completion; theorems that assume such "hidden variables," such as Bell's theorem, are also wrong. (2) Quantum entanglement is not a realizable physical phenomenon and is based entirely on assuming a probability-superposition model for quantum spin; such a model directly violates conservation of angular momentum. (3) Simultaneous multiple paths followed by a quantum particle traveling through space also cannot possibly exist; besides violating Noether's theorem, the multiple-paths theory is based solely on probability calculations, and probability calculations by themselves cannot possibly represent simultaneous physically real events. None of the reviews of the submitted papers actually refuted the arguments and evidence that were presented. These analyses should therefore be carefully evaluated, since the conclusions reached have such important impact in quantum mechanics and quantum information theory.

  3. A scaling law for random walks on networks

    PubMed Central

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-01-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics. PMID:25311870

  4. A scaling law for random walks on networks

    NASA Astrophysics Data System (ADS)

    Perkins, Theodore J.; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-01

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.

  5. A scaling law for random walks on networks.

    PubMed

    Perkins, Theodore J; Foxall, Eric; Glass, Leon; Edwards, Roderick

    2014-10-14

    The dynamics of many natural and artificial systems are well described as random walks on a network: the stochastic behaviour of molecules, traffic patterns on the internet, fluctuations in stock prices and so on. The vast literature on random walks provides many tools for computing properties such as steady-state probabilities or expected hitting times. Previously, however, there has been no general theory describing the distribution of possible paths followed by a random walk. Here, we show that for any random walk on a finite network, there are precisely three mutually exclusive possibilities for the form of the path distribution: finite, stretched exponential and power law. The form of the distribution depends only on the structure of the network, while the stepping probabilities control the parameters of the distribution. We use our theory to explain path distributions in domains such as sports, music, nonlinear dynamics and stochastic chemical kinetics.
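
    A minimal Monte Carlo sketch of the quantity this paper studies: sample complete walk paths on a small absorbing network and tabulate how often each distinct path occurs. The network topology and stepping probabilities below are invented for illustration.

    ```python
    import numpy as np
    from collections import Counter

    # Small directed network with stepping probabilities; node 3 is absorbing.
    # A "path" is the full node sequence from the start node until absorption.
    P = {0: [(1, 0.5), (2, 0.5)],
         1: [(0, 0.3), (3, 0.7)],
         2: [(0, 0.6), (3, 0.4)],
         3: []}
    rng = np.random.default_rng(1)

    def sample_path(start=0):
        path = [start]
        while P[path[-1]]:
            nodes, probs = zip(*P[path[-1]])
            path.append(int(rng.choice(nodes, p=probs)))
        return tuple(path)

    n = 100_000
    counts = Counter(sample_path() for _ in range(n))
    # Ranked empirical path probabilities; how this ranking decays (finite,
    # stretched-exponential, or power-law) is what the theorem classifies.
    for path, c in counts.most_common(5):
        print(path, c / n)
    ```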

  6. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  7. Transition path time distributions

    NASA Astrophysics Data System (ADS)

    Laleman, M.; Carlon, E.; Orland, H.

    2017-12-01

    Biomolecular folding, at least in simple systems, can be described as a two state transition in a free energy landscape with two deep wells separated by a high barrier. Transition paths are the short part of the trajectories that cross the barrier. Average transition path times and, recently, their full probability distribution have been measured for several biomolecular systems, e.g., in the folding of nucleic acids or proteins. Motivated by these experiments, we have calculated the full transition path time distribution for a single stochastic particle crossing a parabolic barrier, including inertial terms which were neglected in previous studies. These terms influence the short time scale dynamics of a stochastic system and can be of experimental relevance in view of the short duration of transition paths. We derive the full transition path time distribution as well as the average transition path times and discuss the similarities and differences with the high friction limit.
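
    The paper derives the distribution analytically, including the inertial terms; the sketch below only samples transition path times numerically, and only in the simpler overdamped (high-friction) limit for a parabolic barrier, with illustrative parameters.

    ```python
    import numpy as np

    # Overdamped Langevin motion over an inverted parabola V(x) = -0.5*k*x**2.
    # A transition path enters at -a and reaches +a without recrossing -a.
    k, a, D, dt = 5.0, 1.0, 1.0, 1e-4
    rng = np.random.default_rng(0)

    def transition_path_time():
        while True:                      # retry until a successful crossing
            x, t = -a, 0.0
            while -a <= x < a:
                x += k * x * D * dt + np.sqrt(2 * D * dt) * rng.normal()
                t += dt
            if x >= a:                   # crossed without returning to -a
                return t

    times = np.array([transition_path_time() for _ in range(500)])
    print(f"mean transition path time ~ {times.mean():.4f} +/- {times.std():.4f}")
    ```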

  8. A Foreign Object Damage Event Detector Data Fusion System for Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Turso, James A.; Litt, Jonathan S.

    2004-01-01

    A Data Fusion System designed to provide a reliable assessment of the occurrence of Foreign Object Damage (FOD) in a turbofan engine is presented. The FOD-event feature level fusion scheme combines knowledge of shifts in engine gas path performance obtained using a Kalman filter, with bearing accelerometer signal features extracted via wavelet analysis, to positively identify a FOD event. A fuzzy inference system provides basic probability assignments (bpa) based on features extracted from the gas path analysis and bearing accelerometers to a fusion algorithm based on the Dempster-Shafer-Yager Theory of Evidence. Details are provided on the wavelet transforms used to extract the foreign object strike features from the noisy data and on the Kalman filter-based gas path analysis. The system is demonstrated using a turbofan engine combined-effects model (CEM), providing both gas path and rotor dynamic structural response, and is suitable for rapid-prototyping of control and diagnostic systems. The fusion of the disparate data can provide significantly more reliable detection of a FOD event than the use of either method alone. The use of fuzzy inference techniques combined with Dempster-Shafer-Yager Theory of Evidence provides a theoretical justification for drawing conclusions based on imprecise or incomplete data.
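
    To make the fusion step concrete, here is a minimal sketch of combining two basic probability assignments with the classic Dempster rule (the Yager variant used in the paper would instead assign the conflict mass to the whole frame). The bpa values and frame labels are hypothetical, not taken from the engine study.

    ```python
    from itertools import product

    # Focal elements are frozensets; THETA is the full frame (ignorance).
    FOD, NOFOD = frozenset({"fod"}), frozenset({"nofod"})
    THETA = FOD | NOFOD

    def dempster_combine(m1, m2):
        """Dempster's rule: combine two basic probability assignments."""
        combined, conflict = {}, 0.0
        for (A, w1), (B, w2) in product(m1.items(), m2.items()):
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass on empty intersections
        return {A: w / (1.0 - conflict) for A, w in combined.items()}

    # Hypothetical bpa's: one from gas-path (Kalman filter) features, one
    # from wavelet features of the bearing accelerometer signal.
    m_gaspath = {FOD: 0.6, NOFOD: 0.1, THETA: 0.3}
    m_accel   = {FOD: 0.7, NOFOD: 0.2, THETA: 0.1}
    print(dempster_combine(m_gaspath, m_accel))
    ```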

  9. Studies of uncontrolled air traffic patterns, phase 1

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.; Scharf, L. L.; Ruedger, W. H.; Modi, J. A.; Wheelock, S. L.; Davis, C. M.

    1975-01-01

    The general aviation air traffic flow patterns at uncontrolled airports are investigated and analyzed and traffic pattern concepts are developed to minimize the midair collision hazard in uncontrolled airspace. An analytical approach to evaluate midair collision hazard probability as a function of traffic densities is established which is basically independent of path structure. Two methods of generating space-time interrelationships between terminal area aircraft are presented; one is a deterministic model to generate pseudorandom aircraft tracks, the other is a statistical model in preliminary form. Some hazard measures are presented for selected traffic densities. It is concluded that the probability of encountering a hazard should be minimized independently of any other considerations and that the number of encounters involving visible-avoidable aircraft should be maximized at the expense of encounters in other categories.

  10. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network whose operation depends on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem, because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  11. Mechanism of the Cassie-Wenzel transition via the atomistic and continuum string methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giacomello, Alberto, E-mail: alberto.giacomello@uniroma1.it; Casciola, Carlo Massimo; Meloni, Simone, E-mail: simone.meloni@epfl.ch

    2015-03-14

    The string method is a general and flexible strategy to compute the most probable transition path for an activated process (rare event). We apply here the atomistic string method in the density field to the Cassie-Wenzel transition, a central problem in the field of superhydrophobicity. We discuss in detail the mechanism of wetting of a submerged hydrophobic cavity of nanometer size and its dependence on the geometry of the cavity. Furthermore, we analyze the algorithmic analogies between the continuum “interface” string method and CREaM [Giacomello et al., Phys. Rev. Lett. 109, 226102 (2012)], a method inspired by the string that allows for a faster and simpler computation of the mechanism and of the free-energy profiles of the wetting process.

  12. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or "phases." The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the "mean intercept length" or "mean chord length." The chord-length distribution function is of importance in transport phenomena and problems involving "discrete free paths" of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material, and we show that it is actually the chord-length distribution function for the system in which the "pore space" is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal-path function gives the probability of finding a line segment of length z wholly in one of the "phases" when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.

  13. Hierarchical folding free energy landscape of HP35 revealed by most probable path clustering.

    PubMed

    Jain, Abhinav; Stock, Gerhard

    2014-07-17

    Adopting extensive molecular dynamics simulations of the villin headpiece protein (HP35) by Shaw and co-workers, a detailed theoretical analysis of the folding of HP35 is presented. The approach is based on the recently proposed most probable path algorithm, which identifies the metastable states of the system, combined with dynamical coring of these states in order to obtain a consistent Markov state model. The method facilitates the construction of a dendrogram associated with the folding free-energy landscape of HP35, which reveals a hierarchical funnel structure and shows that the native state is a kinetic trap rather than a network hub. The energy landscape of HP35 consists of the entropic unfolded basin U, where the prestructuring of the protein takes place; the intermediate basin I, which is connected to U via the rate-limiting U → I transition state reflecting the formation of helix-1; and the native basin N, containing a state close to the NMR structure and a native-like state that exhibits enhanced fluctuations of helix-3. The model is in line with recent experimental observations that the intermediate and native states differ mostly in their dynamics (locked vs unlocked states). Employing dihedral angle principal component analysis, subdiffusive motion on a multidimensional free-energy surface is found.

  14. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention.

    PubMed

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M; Stuart, Elizabeth A

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration.
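
    For a single continuous mediator, the rescaling in step 2 has a closed form, which the sketch below evaluates; all coefficients are hypothetical, and the paper's actual estimation is carried out in Mplus with the ML, WLSMV, and Bayes estimators.

    ```python
    from scipy.stats import norm

    # Hypothetical coefficients for binary treatment A and one continuous
    # mediator: M = a0 + a1*A + u, u ~ N(0, s2);
    # P(Y=1 | A, M) = Phi(b0 + b1*A + b2*M)  (probit link).
    a0, a1, s2 = 0.0, 0.8, 1.0
    b0, b1, b2 = -1.0, 0.4, 0.5

    def p_outcome(a_direct, a_mediator):
        """P(Y(a, M(a')) = 1): integrating the probit over the mediator
        distribution rescales the latent-variable denominator."""
        mean = b0 + b1 * a_direct + b2 * (a0 + a1 * a_mediator)
        return norm.cdf(mean / (1.0 + b2 ** 2 * s2) ** 0.5)

    nde = p_outcome(1, 0) - p_outcome(0, 0)   # natural direct effect (RD)
    nie = p_outcome(1, 1) - p_outcome(1, 0)   # natural indirect effect (RD)
    rr_total = p_outcome(1, 1) / p_outcome(0, 0)
    print(f"NDE = {nde:.3f}, NIE = {nie:.3f}, total-effect RR = {rr_total:.3f}")
    ```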

  15. Search Path Evaluation Incorporating Object Placement Structure

    DTIC Science & Technology

    2007-12-20

    ...the probability of the set complement of this event: Pr(E_d) = 1 - ∏_{(k,t)∈I} ∏_{i∈G} P_k^C (83). Equation (83) provides the probability that if there is a... "...Networks," to appear in IEEE Transactions on Aerospace and Electronic Systems. B. G. Koopman, Search and Screening: General Principles and Historical...

  16. Limited-path-length entanglement percolation in quantum complex networks

    NASA Astrophysics Data System (ADS)

    Cuquet, Martí; Calsamiglia, John

    2011-03-01

    We study entanglement distribution in quantum complex networks where nodes are connected by bipartite entangled states. These networks are characterized by a complex structure, which dramatically affects how information is transmitted through them. For pure quantum state links, quantum networks exhibit a remarkable feature absent in classical networks: it is possible to effectively rewire the network by performing local operations on the nodes. We propose a family of such quantum operations that decrease the entanglement percolation threshold of the network and increase the size of the giant connected component. We provide analytic results for complex networks with an arbitrary (uncorrelated) degree distribution. These results are in good agreement with numerical simulations, which also show enhancement in correlated and real-world networks. The proposed quantum preprocessing strategies are not robust in the presence of noise. However, even when the links consist of (noisy) mixed-state links, one can send quantum information through a connecting path with a fidelity that decreases with the path length. In this noisy scenario, complex networks offer a clear advantage over regular lattices, namely, the fact that two arbitrary nodes can be connected through a relatively small number of steps, known as the small-world effect. We calculate the probability that two arbitrary nodes in the network can successfully communicate with a fidelity above a given threshold. This amounts to working out the classical problem of percolation with a limited path length. We find that this probability can be significant even for paths limited to few connections and that the results for standard (unlimited) percolation are soon recovered if the path length exceeds by a finite amount the average path length, which in complex networks generally scales logarithmically with the size of the network.

  17. Defect-free atomic array formation using the Hungarian matching algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Woojun; Kim, Hyosub; Ahn, Jaewook

    2017-05-01

    Deterministic loading of single atoms onto arbitrary two-dimensional lattice points has recently been demonstrated, where, by dynamically controlling the optical-dipole potential, atoms from a probabilistically loaded lattice were relocated to target lattice points to form a zero-entropy atomic lattice. In this atom rearrangement, how to pair atoms with the target sites is a combinatorial optimization problem: brute-force methods search all possible combinations, so the process is slow, while heuristic methods are time efficient but optimal solutions are not guaranteed. Here, we use the Hungarian matching algorithm as a fast and rigorous alternative to this problem of defect-free atomic lattice formation. Our approach utilizes an optimization cost function that restricts collision-free guiding paths so that atom loss due to collision is minimized during rearrangement. Experiments were performed with cold rubidium atoms that were trapped and guided with holographically controlled optical-dipole traps. The result of atom relocation from a partially filled 7×7 lattice to a 3×3 target lattice strongly agrees with the theoretical analysis: using the Hungarian algorithm minimizes the collisional and trespassing paths and results in improved performance, with over 50% higher success probability than the heuristic shortest-move method.
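
    A minimal sketch of the pairing step using SciPy's Hungarian-algorithm solver; the lattice geometry and the plain squared-distance cost below are illustrative stand-ins for the paper's collision-aware cost function.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(7)

    # Hypothetical geometry: a 7x7 lattice, ~50% stochastically loaded, and
    # a 3x3 target block in the center that must be completely filled.
    lattice = np.array([(i, j) for i in range(7) for j in range(7)], float)
    loaded = lattice[rng.random(len(lattice)) < 0.5]        # occupied sites
    target = np.array([(i, j) for i in range(2, 5) for j in range(2, 5)], float)

    # Cost matrix: squared guiding distance from each loaded atom to each
    # target site (a collision penalty could be added to each entry).
    cost = ((loaded[:, None, :] - target[None, :, :]) ** 2).sum(axis=-1)

    rows, cols = linear_sum_assignment(cost)   # optimal atom-to-site pairing
    total = cost[rows, cols].sum()
    print(f"{len(cols)} target sites matched, total squared distance {total:.1f}")
    ```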

  18. Revisiting the finite temperature string method for the calculation of reaction tubes and free energies

    NASA Astrophysics Data System (ADS)

    Vanden-Eijnden, Eric; Venturoli, Maddalena

    2009-05-01

    An improved and simplified version of the finite temperature string (FTS) method [W. E, W. Ren, and E. Vanden-Eijnden, J. Phys. Chem. B 109, 6688 (2005)] is proposed. Like the original approach, the new method is a scheme to calculate the principal curves associated with the Boltzmann-Gibbs probability distribution of the system, i.e., the curves which are such that their intersection with the hyperplanes perpendicular to themselves coincides with the expected position of the system in these planes (where perpendicular is understood with respect to the appropriate metric). Unlike more standard paths such as the minimum energy path or the minimum free energy path, the location of the principal curve depends on global features of the energy or the free energy landscapes and thereby may remain appropriate in situations where the landscape is rough on the thermal energy scale and/or entropic effects related to the width of the reaction channels matter. Instead of using constrained sampling in hyperplanes as in the original FTS, the new method calculates the principal curve via sampling in the Voronoi tessellation whose generating points are the discretization points along this curve. As shown here, this modification results in greater algorithmic simplicity. As a by-product, it also gives the free energy associated with the Voronoi tessellation. The new method can be applied both in the original Cartesian space of the system or in a set of collective variables. We illustrate FTS on test-case examples and apply it to the study of conformational transitions of the nitrogen regulatory protein C receiver domain using an elastic network model and to the isomerization of solvated alanine dipeptide.

  19. Slant path rain attenuation and path diversity statistics obtained through radar modeling of rain structure

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1984-01-01

    Single and joint terminal slant-path attenuation statistics at frequencies of 28.56 and 19.04 GHz have been derived, employing a radar data base obtained over a three-year period at Wallops Island, VA. Statistics were independently obtained for path elevation angles of 20, 45, and 90 deg for purposes of examining how elevation angle influences both single-terminal and joint probability distributions. Both diversity gains and the dependence of the autocorrelation function on site spacing and elevation angle were determined employing the radar modeling results. Comparisons with other investigators are presented. An independent path-elevation-angle prediction technique was developed and demonstrated to fit well with the radar-derived single- and joint-terminal cumulative fade distributions at various elevation angles.

  20. Response statistics of rotating shaft with non-linear elastic restoring forces by path integration

    NASA Astrophysics Data System (ADS)

    Gaidai, Oleg; Naess, Arvid; Dimentberg, Michael

    2017-07-01

    Extreme statistics of random vibrations is studied for a Jeffcott rotor under uniaxial white noise excitation. The restoring force is modelled as non-linear elastic; a comparison with the linearized restoring force shows the effect of force non-linearity on the response statistics. While analytical solutions and stability conditions are available for the linear model, this is generally not the case for the non-linear system, except in some special cases. The statistics of the non-linear case are studied by applying the path integration (PI) method, which is based on the Markov property of the coupled dynamic system. The Jeffcott rotor response statistics can be obtained by solving the Fokker-Planck (FP) equation of the 4D dynamic system. An efficient implementation of the PI algorithm is applied; namely, the fast Fourier transform (FFT) is used to simulate the dynamic system's additive noise. The latter significantly reduces computation time compared to the classical PI. Excitation is modelled as Gaussian white noise, though white noise with any distribution can be implemented with the same PI technique. Multidirectional Markov noise can also be modelled with PI in the same way as unidirectional noise. PI is accelerated by using a Monte Carlo (MC) estimated joint probability density function (PDF) as the initial input. The symmetry of the dynamic system was exploited to afford higher mesh resolution. Both internal (rotating) and external damping are included in the mechanical model of the rotor. The main advantage of using PI rather than MC is that PI offers high accuracy in the probability distribution tail. The latter is of critical importance for, e.g., extreme value statistics, system reliability, and first passage probability.

  1. A bayesian approach for determining velocity and uncertainty estimates from seismic cone penetrometer testing or vertical seismic profiling data

    USGS Publications Warehouse

    Pidlisecky, Adam; Haines, S.S.

    2011-01-01

    Conventional processing methods for seismic cone penetrometer data present several shortcomings, most notably the absence of a robust velocity model uncertainty estimate. We propose a new seismic cone penetrometer testing (SCPT) data-processing approach that employs Bayesian methods to map measured data errors into quantitative estimates of model uncertainty. We first calculate travel-time differences for all permutations of seismic trace pairs. That is, we cross-correlate each trace at each measurement location with every trace at every other measurement location to determine travel-time differences that are not biased by the choice of any particular reference trace and to thoroughly characterize data error. We calculate a forward operator that accounts for the different ray paths for each measurement location, including refraction at layer boundaries. We then use a Bayesian inversion scheme to obtain the most likely slowness (the reciprocal of velocity) and a distribution of probable slowness values for each model layer. The result is a velocity model that is based on correct ray paths, with uncertainty bounds that are based on the data error. © NRC Research Press 2011.
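
    A toy version of the Bayesian step, assuming a linear (straight-ray) travel-time operator and Gaussian prior and data errors, for which the posterior over layer slownesses has a closed form; all numbers below are invented, and the paper's forward operator additionally handles refraction at layer boundaries.

    ```python
    import numpy as np

    # Linear travel-time model t = G s, where s holds layer slownesses and
    # G holds the ray path length in each layer for each measurement.
    G = np.array([[2.0, 0.0, 0.0],
                  [2.0, 3.0, 0.0],
                  [2.0, 3.0, 4.0]])            # metres per layer per ray
    t_obs = np.array([0.004, 0.013, 0.029])    # observed travel times, s
    sigma_d = 0.0005                           # data error (cross-correlation)
    prior_mean = np.full(3, 0.002)             # prior slowness, s/m
    prior_var = 0.002 ** 2

    # Gaussian posterior: covariance and mean in closed form.
    Cd_inv = np.eye(3) / sigma_d ** 2
    Cm_inv = np.eye(3) / prior_var
    post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
    post_mean = post_cov @ (G.T @ Cd_inv @ t_obs + Cm_inv @ prior_mean)

    for i, (m, sd) in enumerate(zip(post_mean, np.sqrt(np.diag(post_cov)))):
        print(f"layer {i}: velocity ~ {1 / m:.0f} m/s, slowness std {sd:.2e} s/m")
    ```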

  2. Rate Constant and Reaction Coordinate of Trp-Cage Folding in Explicit Water

    PubMed Central

    Juraszek, Jarek; Bolhuis, Peter G.

    2008-01-01

    We report rate constant calculations and a reaction coordinate analysis of the rate-limiting folding and unfolding process of the Trp-cage mini-protein in explicit solvent using transition interface sampling. Previous transition path sampling simulations revealed that in this (un)folding process the protein maintains its compact configuration while the secondary structure content increases or decreases. The calculated folding rate agrees reasonably with experiment, while the unfolding rate is 10 times higher. We discuss possible origins for this mismatch. We recomputed the rates with the forward flux sampling method and found a discrepancy of four orders of magnitude, probably caused by that method's higher sensitivity to the choice of order parameter with respect to transition interface sampling. Finally, we used the previously computed transition path sampling ensemble to screen combinations of many order parameters for the best model of the reaction coordinate by employing likelihood maximization. We found that a combination of the root mean-square deviation of the helix and of the entire protein was, of the set of tried order parameters, the one that best describes the reaction coordinate. PMID:18676648

  3. Smisc - A collection of miscellaneous functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landon Sego, PNNL

    2015-08-31

    A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Selected functions: allMissing: identifies missing rows or columns in a data frame or matrix; as.numericSilent: silent wrapper for coercing a vector to numeric; comboList: produces all possible combinations of a set of linear model predictors; cumMax: computes the maximum of a vector up to the current index; cumsumNA: computes the cumulative sum of a vector without propagating NAs; d2binom, p2binom: probability functions for the sum of two independent binomials; dkbinom, pkbinom: probability functions for the sum of k independent binomials; dbb, pbb, qbb, rbb: the beta-binomial distribution; dataIn: a flexible way to import data into R; df2list: row-wise conversion of a data frame to a list; dfplapply: parallelized single-row processing of a data frame; dframeEquiv: examines the equivalence of two data frames or matrices; factor2character: converts all factor variables in a data frame to character variables; findDepMat: identifies linearly dependent rows or columns in a matrix; formatDT: converts date or datetime strings into alternate formats; getExtension, getPath, grabLast: filename manipulations (remove or extract the extension or path); ifelse1: non-vectorized version of ifelse; integ: simple numerical integration routine; interactionPlot: two-way interaction plot with error bars; linearMap: linear mapping of a numerical vector or scalar; list2df: converts a list to a data frame; loadObject: loads and returns the object(s) in an ".Rdata" file; more: displays the contents of a file in the R terminal; movAvg2: calculates the moving average using a two-sided window; openDevice: opens a graphics device based on the filename extension; padZero: pads a vector of numbers with zeros; parseJob: parses a collection of elements into (almost) equal-sized groups; pcbinom: a continuous version of the binomial cdf; plapply: simple parallelization of lapply; plotFun: plots one or more functions on a single plot; PowerData: an example of power data; pvar: prints the name and value of one or more objects; and numerous others (space limits reporting).

  4. Exact transition probabilities in a 6-state Landau–Zener system with path interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinitsyn, Nikolai A.

    2015-04-23

    In this paper, we identify a nontrivial multistate Landau–Zener (LZ) model for which transition probabilities between any pair of diabatic states can be determined analytically and exactly. In the semiclassical picture, this model features the possibility of interference of different trajectories that connect the same initial and final states. Hence, transition probabilities are generally not described by the incoherent successive application of the LZ formula. Finally, we discuss reasons for integrability of this system and provide numerical tests of the suggested expression for the transition probability matrix.

  5. Computer simulation radiation damages in condensed matters

    NASA Astrophysics Data System (ADS)

    Kupchishin, A. I.; Kupchishin, A. A.; Voronova, N. A.; Kirdyashkin, V. I.; Gyngazov, V. A.

    2016-02-01

    Within the cascade-probability method, the energy spectra of primary knock-on atoms (PKA) and the concentration of radiation-induced defects were calculated for a number of metals irradiated by electrons. As follows from the formulas, the number of Frenkel pairs at a given depth depends on three quantities with definite physical meaning: first, C_d(E_a, h) is proportional to the average PKA energy at the considered depth (the higher it is, the greater the number of atoms the PKA will displace); second, it is inversely proportional to the path length λ1 for the formation of the PKA (the larger λ1, the smaller the probability of interaction); and third, it is inversely proportional to E_d. The calculations are in satisfactory agreement with experimental data (for example, for copper and aluminum).

  6. Dynamics and Hall-edge-state mixing of localized electrons in a two-channel Mach-Zehnder interferometer

    NASA Astrophysics Data System (ADS)

    Bellentani, Laura; Beggi, Andrea; Bordone, Paolo; Bertoni, Andrea

    2018-05-01

    We present a numerical study of a multichannel electronic Mach-Zehnder interferometer, based on magnetically driven noninteracting edge states. The electron path is defined by a full-scale potential landscape on the two-dimensional electron gas at filling factor 2, assuming that initially only the first Landau level is filled. We tailor the two beam splitters with 50% interchannel mixing and measure Aharonov-Bohm oscillations in the transmission probability of the second channel. We perform time-dependent simulations by solving the electron Schrödinger equation through a parallel implementation of the split-step Fourier method, and we describe the charge-carrier wave function as a Gaussian wave packet of edge states. We finally develop a simplified theoretical model to explain the features observed in the transmission probability, and we propose possible strategies to optimize gate performances.

  7. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process is then described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect the data needs specific to IDHEAS; more importantly, they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.

  8. Salecker-Wigner-Peres clock, Feynman paths, and a tunneling time that should not exist

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.

    2017-08-01

    The Salecker-Wigner-Peres (SWP) clock is often used to determine the duration a quantum particle is supposed to spend in a specified region of space Ω . By construction, the result is a real positive number, and the method seems to avoid the difficulty of introducing complex time parameters, which arises in the Feynman paths approach. However, it tells little about the particle's motion. We investigate this matter further, and show that the SWP clock, like any other Larmor clock, correlates the rotation of its angular momentum with the durations τ , which the Feynman paths spend in Ω , thereby destroying interference between different durations. An inaccurate weakly coupled clock leaves the interference almost intact, and the need to resolve the resulting "which way?" problem is one of the main difficulties at the center of the "tunnelling time" controversy. In the absence of a probability distribution for the values of τ , the SWP results are expressed in terms of moduli of the "complex times," given by the weighted sums of the corresponding probability amplitudes. It is shown that overinterpretation of these results, by treating the SWP times as physical time intervals, leads to paradoxes and should be avoided. We also analyze various settings of the SWP clock, different calibration procedures, and the relation between the SWP results and the quantum dwell time. The cases of stationary tunneling and tunnel ionization are considered in some detail. Although our detailed analysis addresses only one particular definition of the duration of a tunneling process, it also points towards the impossibility of uniting various time parameters, which may occur in quantum theory, within the concept of a single tunnelling time.

  9. Quasi Path Restoration: A post-failure recovery scheme over pre-allocated backup resource for elastic optical networks

    NASA Astrophysics Data System (ADS)

    Yadav, Dharmendra Singh; Babu, Sarath; Manoj, B. S.

    2018-03-01

    Spectrum conflict during primary and backup route assignment in elastic optical networks results in increased resource consumption as well as high bandwidth blocking probability. In order to avoid such conflicts, we propose a new scheme, Quasi Path Restoration (QPR), in which we divide the available spectrum into two parts: (1) primary spectrum (for primary route allocation) and (2) backup spectrum (for rerouting the data on link failures). QPR exhibits three advantages over existing survivable strategies such as Shared Path Protection (SPP), Primary First Fit Backup Last Fit (PFFBLF), Jointly Releasing and re-establishment Defragmentation SPP (JRDSSPP), and Path Restoration (PR): (1) the conflict between primary and backup spectrum during route assignment is completely eliminated, (2) upon a link failure, connection recovery requires fewer backup resources compared to SPP, PFFBLF, and PR, and (3) availability of the same backup spectrum on each link improves the recovery guarantee. The performance of our scheme is analyzed for different primary-backup spectrum partitions under varying connection-request demands and numbers of frequency slots. Our results show that QPR provides a better connection recovery guarantee and Backup Resource Utilization (BRU) compared to the bandwidth recovery of the PR strategy. In addition, we compare QPR with SPP and PFFBLF in terms of Bandwidth Blocking Probability (BBP) and average frequency slots per connection request. Simulation results show that the BBP of SPP, PFFBLF, and JRDSSPP varies between 14.42% and 18.59%, while in QPR, BBP ranges from 2.55% to 17.76% for the Cost239, NSFNET, and ARPANET topologies. Also, QPR provides bandwidth recovery between 93.61% and 100%, while in PR the recovery ranges from 86.81% to 98.99%. It is evident from our analysis that QPR provides a reasonable trade-off between bandwidth blocking probability and connection recoverability.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelly, Kevin J.; Parke, Stephen J.

    Quantum mechanical interactions between neutrinos and matter along the path of propagation, the Wolfenstein matter effect, are of particular importance for the upcoming long-baseline neutrino oscillation experiments, specifically the Deep Underground Neutrino Experiment (DUNE). Here, we explore specifically what about the matter density profile can be measured by DUNE, considering both the shape and normalization of the profile between the neutrinos' origin and detection. Additionally, we explore the capability of a perturbative method for calculating neutrino oscillation probabilities and whether this method is suitable for DUNE. We also briefly quantitatively explore the ability of DUNE to measure the Earth's matter density, and the impact of performing this measurement on measuring standard neutrino oscillation parameters.

  11. Application of the string method to the study of critical nuclei in capillary condensation.

    PubMed

    Qiu, Chunyin; Qian, Tiezheng; Ren, Weiqing

    2008-10-21

    We adopt a continuum description for liquid-vapor phase transition in the framework of mean-field theory and use the string method to numerically investigate the critical nuclei for capillary condensation in a slit pore. This numerical approach allows us to determine the critical nuclei corresponding to saddle points of the grand potential function in which the chemical potential is given in the beginning. The string method locates the minimal energy path (MEP), which is the most probable transition pathway connecting two metastable/stable states in configuration space. From the MEP, the saddle point is determined and the corresponding energy barrier also obtained (for grand potential). Moreover, the MEP shows how the new phase (liquid) grows out of the old phase (vapor) along the most probable transition pathway, from the birth of a critical nucleus to its consequent expansion. Our calculations run from partial wetting to complete wetting with a variable strength of attractive wall potential. In the latter case, the string method presents a unified way for computing the critical nuclei, from film formation at solid surface to bulk condensation via liquid bridge. The present application of the string method to the numerical study of capillary condensation shows the great power of this method in evaluating the critical nuclei in various liquid-vapor phase transitions.
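
    For readers unfamiliar with the algorithm, here is a minimal zero-temperature string-method sketch on a 2-D double-well potential (not the mean-field capillary functional of the paper): images descend the gradient and are then reparameterized to equal arc length, relaxing onto the minimal energy path through the saddle.

    ```python
    import numpy as np

    # Double-well potential V(x, y) = (x**2 - 1)**2 + y**2 with minima near
    # (-1, 0) and (1, 0); the string relaxes onto the MEP along y = 0.
    def grad_V(p):
        x, y = p[:, 0], p[:, 1]
        return np.stack([4 * x * (x ** 2 - 1), 2 * y], axis=1)

    n, dt = 30, 5e-3
    s = np.linspace(0, 1, n)
    string = np.stack([2 * s - 1, 0.4 * np.sin(np.pi * s)], axis=1)  # guess

    for _ in range(5000):
        string -= dt * grad_V(string)            # gradient-descent step
        # Reparameterize to equal arc length (the string-method projection).
        seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
        arc = np.insert(np.cumsum(seg), 0, 0.0) / seg.sum()
        string = np.stack([np.interp(s, arc, string[:, k]) for k in (0, 1)],
                          axis=1)

    saddle = string[np.argmax(string[:, 0] ** 2 < 0.01)]  # image near x = 0
    print("approximate saddle point:", saddle)
    ```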

  12. Nonequilibrium steady state of a weakly-driven Kardar–Parisi–Zhang equation

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Sasorov, Pavel V.; Vilenkin, Arkady

    2018-05-01

    We consider an infinite interface of d > 2 dimensions, governed by the Kardar–Parisi–Zhang (KPZ) equation with a weak Gaussian noise which is delta-correlated in time and has short-range spatial correlations. We study the probability distribution P(H) of the interface height H at a point of the substrate, when the interface is initially flat. We show that, in stark contrast with the KPZ equation in d < 2, this distribution approaches a non-equilibrium steady state. The time of relaxation toward this state scales as the diffusion time over the correlation length of the noise. We study the steady-state distribution using the optimal-fluctuation method. The typical, small fluctuations of height are Gaussian. For these fluctuations the activation path of the system coincides with the time-reversed relaxation path, and the variance of H can be found from a minimization of the (nonlocal) equilibrium free energy of the interface. In contrast, the tails of P(H) are nonequilibrium, non-Gaussian and strongly asymmetric. To determine them we calculate, analytically and numerically, the activation paths of the system, which are different from the time-reversed relaxation paths. We show that the two tails of P(H) obey distinct scaling laws; the slower-decaying tail has important implications for the statistics of directed polymers in a random potential.

  13. Locally enhanced sampling molecular dynamics study of the dioxygen transport in human cytoglobin.

    PubMed

    Orlowski, Slawomir; Nowak, Wieslaw

    2007-07-01

    Cytoglobin (Cyg), a new member of the vertebrate heme globin family, is expressed in many tissues of the human body, but its physiological role is still unclear. It may deliver oxygen under hypoxia, serve as a scavenger of reactive species, or be involved in collagen synthesis. This protein is usually six-coordinated and binds oxygen by a displacement of the distal HisE7 imidazole. In this paper, the results of 60 ns molecular dynamics (MD) simulations of dioxygen diffusion inside the Cyg matrix are discussed. In addition to a classical MD trajectory, an approximate Locally Enhanced Sampling (LES) method has been employed. Classical diffusion paths were carefully analyzed, five cavities in the dynamical structures were determined, and at least four distinct ligand exit paths were identified. The most probable exit/entry path is connected with a large tunnel present in Cyg. Several residues that are perhaps critical for the kinetics of small gaseous ligand diffusion were discovered. A comparison of gaseous ligand transport in Cyg and in the most studied heme protein, myoglobin, is presented. Implications of the efficient oxygen transport found in Cyg for its possible physiological role are discussed.

  14. Network Design for Reliability and Resilience to Attack

    DTIC Science & Technology

    2014-03-01

    ...attacker can destroy n arcs in the network... Abbreviations: SPNI, Shortest-Path Network-Interdiction problem; TSP, Traveling Salesman Problem; UB, upper bound; UKR, Ukraine. ...elimination from the traveling salesman problem (TSP). Literature calls a walk that does not contain a cycle a path [19]. The objective function in... arc lengths as random variables with known probability distributions. The m-median problem seeks to design a network with minimum average travel cost.

  15. Are Tornadoes Getting Stronger?

    NASA Astrophysics Data System (ADS)

    Elsner, J.; Jagger, T.

    2013-12-01

    A cumulative logistic model for tornado damage category is developed and examined. Damage path length and width are significantly correlated with the odds of a tornado receiving the next-highest damage category. Given values for the cube root of path length and the square root of path width, the model predicts a probability for each category. The length and width coefficients are insensitive to the switch to the Enhanced Fujita (EF) scale and to distance from the nearest city, although these variables are statistically significant in the model. The width coefficient is sensitive to whether or not the tornado caused at least one fatality. This is likely due to the fact that the dimensions and characteristics of the damage path for such events are always based on ground surveys. The model-predicted probabilities across the categories are then multiplied by the center wind speed of each EF category to obtain an estimate of the highest tornado wind speed on a continuous scale in units of meters per second. The estimated wind speeds correlate at a level of 0.82 (95% confidence interval: 0.46, 0.95) with wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. More work needs to be done to understand the upward trends in path length and width. The increases lead to an apparent increase in tornado intensity across all EF categories.
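
    A sketch of fitting such a cumulative (ordered) logistic model and converting the category probabilities to a continuous wind-speed estimate. The data are synthetic, the category thresholds and midpoint speeds are hypothetical, and the sketch assumes statsmodels (>= 0.12) provides OrderedModel.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 2000
    length_km = rng.gamma(2.0, 5.0, n)        # synthetic path lengths
    width_m = rng.gamma(2.0, 100.0, n)        # synthetic path widths
    X = pd.DataFrame({"cbrt_len": np.cbrt(length_km),
                      "sqrt_wid": np.sqrt(width_m)})

    # Synthetic ordered damage categories from a latent logistic model.
    latent = 1.2 * X["cbrt_len"] + 0.15 * X["sqrt_wid"] + rng.logistic(size=n)
    category = np.digitize(latent, [4.0, 6.0])   # three categories: 0, 1, 2

    res = OrderedModel(category, X, distr="logit").fit(method="bfgs",
                                                       disp=False)
    probs = np.asarray(res.predict(X))           # per-category probabilities

    # Weight hypothetical category midpoint speeds (m/s) by the predicted
    # probabilities to get a continuous wind-speed estimate.
    midpoints = np.array([33.0, 50.0, 70.0])
    wind = probs @ midpoints
    print("estimated wind speeds (m/s), first five:", np.round(wind[:5], 1))
    ```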

  16. A Fast Numerical Method for Max-Convolution and the Application to Efficient Max-Product Inference in Bayesian Networks.

    PubMed

    Serang, Oliver

    2015-08-01

    Observations depending on sums of random variables are common throughout many fields; however, no efficient solution is currently known for performing max-product inference on these sums of general discrete distributions (max-product inference can be used to obtain maximum a posteriori estimates). The limiting step in max-product inference is the max-convolution problem (sometimes presented in log-transformed form and denoted as "infimal convolution," "min-convolution," or "convolution on the tropical semiring"), for which no O(k log(k)) method is currently known. Presented here is an O(k log(k)) numerical method for estimating the max-convolution of two nonnegative vectors (e.g., two probability mass functions), where k is the length of the larger vector. This numerical max-convolution method is then demonstrated by performing fast max-product inference on a convolution tree, a data structure for performing fast inference given information on the sum of n discrete random variables in O(nk log(nk)log(n)) steps (where each random variable has an arbitrary prior distribution on k contiguous possible states). The numerical max-convolution method can be applied to specialized classes of hidden Markov models to reduce the runtime of computing the Viterbi path from O(nk²) to O(nk log(k)), and has potential application to the all-pairs shortest paths problem.
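
    The core trick can be sketched in a few lines: because max_i u_i is approximated by the p-norm of u for large p, a single FFT convolution of the p-th powers estimates the max-convolution. The p value and test vectors below are arbitrary, and the published method refines this idea further (e.g., adaptive p per index).

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def max_convolve_pnorm(x, y, p=64):
        """O(k log k) estimate of z[m] = max_i x[i]*y[m-i], x, y >= 0.
        Uses max(u) ~ ||u||_p for large p: an ordinary convolution of the
        p-th powers gives the p-th power of the estimate."""
        z_p = fftconvolve(x ** p, y ** p)
        return np.clip(z_p, 0.0, None) ** (1.0 / p)

    def max_convolve_naive(x, y):
        """O(k^2) reference implementation for comparison."""
        z = np.zeros(len(x) + len(y) - 1)
        for i, xi in enumerate(x):
            z[i:i + len(y)] = np.maximum(z[i:i + len(y)], xi * y)
        return z

    rng = np.random.default_rng(3)
    x, y = rng.random(256), rng.random(256)
    err = np.abs(max_convolve_pnorm(x, y) - max_convolve_naive(x, y)).max()
    print(f"max abs error of p-norm estimate: {err:.4f}")
    ```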

  17. Some path-following techniques for solution of nonlinear equations and comparison with parametric differentiation

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Walters, R. W.

    1986-01-01

    Some path-following techniques are described and compared with other methods. Using multipurpose techniques that apply at more than one stage of the path-following computation results in a system that is relatively simple to understand, program, and use. Comparison of path-following methods with the method of parametric differentiation reveals definite advantages for the path-following methods. The fact that parametric differentiation has found a broader range of applications indicates that path-following methods have been underutilized.

  18. Tolerance analysis program

    NASA Technical Reports Server (NTRS)

    Watson, H. K.

    1971-01-01

    This digital computer program determines the tolerance values of an end-to-end signal chain or flow path, given a preselected probability value. The technique is useful in the synthesis and analysis phases of subsystem design processes.
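
    In the common special case of independent, normally distributed stage errors, this kind of task reduces to a root-sum-square stack-up, sketched below with invented stage sigmas; the original program's exact algorithm is not documented in the abstract.

    ```python
    import math
    from scipy.stats import norm

    # Hypothetical signal chain: independent stage errors with known standard
    # deviations. For a preselected probability that the end-to-end error
    # stays in band, the RSS rule gives the tolerance as a normal quantile
    # of the combined sigma.
    stage_sigmas = [0.10, 0.25, 0.15, 0.05]   # per-stage standard deviations
    p_inside = 0.997                          # preselected probability

    sigma_chain = math.sqrt(sum(s * s for s in stage_sigmas))
    z = norm.ppf(0.5 + p_inside / 2.0)        # two-sided quantile
    print(f"chain sigma = {sigma_chain:.3f}; "
          f"end-to-end tolerance = +/-{z * sigma_chain:.3f} at P = {p_inside}")
    ```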

  19. Method for detecting and avoiding flight hazards

    NASA Astrophysics Data System (ADS)

    von Viebahn, Harro; Schiefele, Jens

    1997-06-01

    Today's aircraft are equipped with several independent warning and hazard avoidance systems such as GPWS, TCAS, and weather radar. It is the pilot's task to monitor all these systems and take the appropriate action in case of an emerging hazardous situation. The method developed for detecting and avoiding flight hazards combines all potential external threats to an aircraft into a single system. It is based on a model of the airspace surrounding the aircraft consisting of discrete volume elements. For each volume element, a threat probability is derived or computed from sensor output, databases, or information provided via datalink. The position of the own aircraft is predicted using a probability distribution. This approach ensures that all potential positions of the aircraft in the near future are considered, while weighting the most likely flight path. A conflict detection algorithm raises an alarm when the threat probability exceeds a threshold. An escape manoeuvre is then generated, taking into account all potential hazards in the vicinity, not only the one that caused the alarm. The pilot receives visual information about the type, location, and severity of the threat. The algorithm was implemented and tested in a flight simulator environment. The current version comprises traffic, terrain, and obstacle hazard avoidance functions. Its general formulation allows easy integration of, e.g., weather information or airspace restrictions.
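
    A toy 2-D rendition of the volume-element idea: a threat-probability grid, a Gaussian position distribution for the predicted flight path, and an alarm when the expected threat under that distribution exceeds a threshold. Grid contents, sigma, and threshold are all invented for illustration.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Threat probability per cell (terrain, traffic, ...), invented values.
    threat = np.zeros((64, 64))
    threat[40:64, :] = 0.9                  # e.g., rising terrain ahead
    threat[20:24, 30:34] = 0.7              # e.g., a traffic conflict

    # Gaussian position uncertainty around the predicted flight-path point.
    yy, xx = np.mgrid[-8:9, -8:9]
    pos_pdf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))
    pos_pdf /= pos_pdf.sum()

    # Expected threat at every candidate predicted position: the threat
    # field convolved with the position distribution.
    expected = fftconvolve(threat, pos_pdf, mode="same")

    predicted_cell = (40, 32)               # hypothetical predicted position
    THRESHOLD = 0.3
    if expected[predicted_cell] > THRESHOLD:
        print(f"ALERT: expected threat {expected[predicted_cell]:.2f} "
              f"exceeds {THRESHOLD}; generate escape manoeuvre")
    ```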

  20. Statistical mechanics of neocortical interactions. Derivation of short-term-memory capacity

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1984-06-01

    A theory developed by the author to describe macroscopic neocortical interactions demonstrates that empirical values of chemical and electrical parameters of synaptic interactions establish several minima of the path-integral Lagrangian as a function of excitatory and inhibitory columnar firings. The number of possible minima, their time scales of hysteresis and probable reverberations, and their nearest-neighbor columnar interactions are all consistent with well-established empirical rules of human short-term memory. Thus, aspects of conscious experience are derived from neuronal firing patterns, using modern methods of nonlinear nonequilibrium statistical mechanics to develop realistic explicit synaptic interactions.

  1. A new technique in the global reliability of cyclic communications network

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1989-01-01

    The global reliability of a communications network is the probability that given any pair of nodes, there exists a viable path between them. A characterization of connectivity, for a given class of networks, can enable one to find this reliability. Such a characterization is described for a useful class of undirected networks called daisy-chained or braided networks. This leads to a new method of quickly computing the global reliability of these networks. Asymptotic behavior in terms of component reliability is related to geometric properties of the given graph. Generalization of the technique is discussed.
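

    The quick closed-form computation is the paper's contribution; as a baseline against which such a characterization can be checked, global (all-terminal) reliability can also be estimated by brute-force Monte Carlo, as in this sketch (the small "braided" ring and the edge reliability are illustrative):

    ```python
    # Monte Carlo estimate of the probability that every pair of nodes stays
    # connected when each edge independently survives with probability p_edge.
    import random

    def connected(nodes, up_edges):
        adj = {n: [] for n in nodes}
        for u, v in up_edges:
            adj[u].append(v); adj[v].append(u)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w); stack.append(w)
        return len(seen) == len(nodes)

    def global_reliability(nodes, edges, p_edge, trials=20_000):
        ok = sum(
            connected(nodes, [e for e in edges if random.random() < p_edge])
            for _ in range(trials)
        )
        return ok / trials

    # A small braided ring: each node linked to its two successors.
    nodes = list(range(6))
    edges = [(i, (i + 1) % 6) for i in nodes] + [(i, (i + 2) % 6) for i in nodes]
    print(global_reliability(nodes, edges, p_edge=0.9))
    ```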

  2. Bidirectional quantum teleportation of unknown photons using path-polarization intra-particle hybrid entanglement and controlled-unitary gates via cross-Kerr nonlinearity

    NASA Astrophysics Data System (ADS)

    Heo, Jino; Hong, Chang-Ho; Lim, Jong-In; Yang, Hyung-Jin

    2015-05-01

    We propose an arbitrary controlled-unitary (CU) gate and a bidirectional quantum teleportation (BQTP) scheme. The proposed CU gate utilizes photonic qubits (photons) with cross-Kerr nonlinearities (XKNLs), X-homodyne detectors, and linear optical elements, and consists of the consecutive operation of a controlled-path (C-path) gate and a gathering-path (G-path) gate. It is almost deterministic and feasible with current technology when a strong coherent state and weak XKNLs are employed. Based on the CU gate, we present a BQTP scheme that simultaneously teleports two unknown photons between distant users by transmitting only one photon in a path-polarization intra-particle hybrid entangled state. Consequently, it is possible to experimentally implement BQTP with a certain success probability using the proposed CU gate. Project supported by the Ministry of Science, ICT&Future Planning, Korea, under the C-ITRC (Convergence Information Technology Research Center) Support program (NIPA-2013-H0301-13-3007) supervised by the National IT Industry Promotion Agency.

  3. Mathematical model for path selection by ants between nest and food source.

    PubMed

    Bodnar, Marek; Okińczyc, Natalia; Vela-Pérez, M

    2017-03-01

    Several models have been proposed to describe the behavior of ants when moving from nest to food sources. Most of these studies were based on numerical simulations with no mathematical justification. In this paper, we propose a mechanism for the formation of paths of minimal length between two points by a collection of individuals undergoing reinforced random walks, taking into account not only the lengths of the paths but also the angles (connected to the preference of ants to move along straight lines). Our model involves reinforcement (pheromone accumulation), persistence (tendency to preferably follow straight directions in the absence of any external effect) and takes into account the bifurcation angles of each edge (represented by a probability of willingness to choose the path with the smallest angle). We describe the results analytically for 2 ants and different path lengths, and give numerical simulations for several ants.
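
    A minimal sketch of the reinforcement mechanism for the two-path case: each ant chooses a branch with probability weighted by deposited pheromone, and the shorter path is reinforced at a higher effective rate. Parameter values are illustrative, not taken from the paper, and the angle-preference term is omitted for brevity.

    ```python
    # Reinforced random choice between a short and a long path.
    import random

    lengths = {"short": 1.0, "long": 1.5}
    pheromone = {"short": 1.0, "long": 1.0}
    ALPHA = 2.0      # nonlinearity of the choice function (assumed)
    DEPOSIT = 0.1    # pheromone laid per completed trip (assumed)

    for trip in range(2000):
        w = {k: pheromone[k] ** ALPHA for k in lengths}
        total = sum(w.values())
        path = "short" if random.random() < w["short"] / total else "long"
        # Shorter trips finish sooner, so they deposit at a higher rate.
        pheromone[path] += DEPOSIT / lengths[path]

    share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
    print(f"pheromone share on short path after 2000 trips: {share:.2f}")
    ```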

  4. Identifying decohering paths in closed quantum systems

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1990-01-01

    A specific proposal is discussed for how to identify decohering paths in a wavefunction of the universe. The emphasis is on determining the correlations among subsystems and then considering how these correlations evolve. The proposal is similar to earlier ideas of Schroedinger and of Zeh, but in other ways it is closer to the decoherence functional of Griffiths, Omnes, and Gell-Mann and Hartle. The interesting differences with each of these are discussed. Once a given coarse-graining is chosen, the candidate paths are fixed in this scheme, and a single well-defined number measures the degree of decoherence for each path. The normal probability sum rules are exactly obeyed (instantaneously) by these paths regardless of the level of decoherence. How one might quantify some other aspects of classicality is also briefly discussed. The important role that concrete calculations play in testing this and other proposals is stressed.

  5. Investigating rare events with nonequilibrium work measurements. I. Nonequilibrium transition path probabilities

    NASA Astrophysics Data System (ADS)

    Moradi, Mahmoud; Sagui, Celeste; Roland, Christopher

    2014-01-01

    We have developed a formalism for investigating transition pathways and transition probabilities for rare events in biomolecular systems. In this paper, we set the theoretical framework for employing nonequilibrium work relations to estimate the relative reaction rates associated with different classes of transition pathways. In particular, we derive an extension of Crooks' transient fluctuation theorem, which relates the relative transition rates of driven systems in the forward and reverse directions, and allows for the calculation of these relative rates using work measurements (e.g., in Steered Molecular Dynamics). The formalism presented here can be combined with Transition Path Theory to relate the equilibrium and driven transition rates. The usefulness of this framework is illustrated by means of a Gaussian model and a driven proline dimer.

  6. Modeling Percolation in Polymer Nanocomposites by Stochastic Microstructuring

    PubMed Central

    Soto, Matias; Esteva, Milton; Martínez-Romero, Oscar; Baez, Jesús; Elías-Zúñiga, Alex

    2015-01-01

    A methodology was developed for the prediction of the electrical properties of carbon nanotube-polymer nanocomposites via Monte Carlo computational simulations. A two-dimensional microstructure that takes into account waviness, fiber length and diameter distributions is used as a representative volume element. Fiber interactions in the microstructure are identified and then modeled as an equivalent electrical circuit, assuming one-third metallic and two-thirds semiconductor nanotubes. Tunneling paths in the microstructure are also modeled as electrical resistors, and crossing fibers are accounted for by assuming a contact resistance associated with them. The equivalent resistor network is then converted into a set of linear equations using nodal voltage analysis, which is then solved by means of the Gauss–Jordan elimination method. Nodal voltages are obtained for the microstructure, from which the percolation probability, equivalent resistance and conductivity are calculated. Percolation probability curves and electrical conductivity values are compared to those found in the literature. PMID:28793594
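
    A minimal sketch of the nodal-voltage step described above: fiber contacts become resistors, the resistor network becomes a linear system G v = i, and the equivalent resistance follows from the solved node voltages. The tiny 4-node network is illustrative, and numpy's dense solver stands in for the Gauss-Jordan elimination used in the paper.

    ```python
    # Nodal analysis of a small resistor network.
    import numpy as np

    # (node_a, node_b, resistance_ohms); node 0 is the source, node 3 the sink.
    resistors = [(0, 1, 100.0), (1, 3, 100.0), (0, 2, 50.0),
                 (2, 3, 150.0), (1, 2, 75.0)]
    n = 4
    G = np.zeros((n, n))                 # conductance (Laplacian) matrix
    for a, b, r in resistors:
        g = 1.0 / r
        G[a, a] += g; G[b, b] += g
        G[a, b] -= g; G[b, a] -= g

    I = np.zeros(n)
    I[0] = 1.0                           # inject 1 A at the source node
    # Ground the sink node by deleting its row/column, then solve G v = i.
    keep = [0, 1, 2]
    v = np.linalg.solve(G[np.ix_(keep, keep)], I[keep])
    print(f"equivalent resistance source->sink: {v[0]:.1f} ohm")
    ```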

  7. Cortico-Cortical, Cortico-Striatal, and Cortico-Thalamic White Matter Fiber Tracts Generated in the Macaque Brain via Dynamic Programming

    PubMed Central

    Lal, Rakesh M.; An, Michael; Poynton, Clare B.; Li, Muwei; Jiang, Hangyi; Oishi, Kenichi; Selemon, Lynn D.; Mori, Susumu; Miller, Michael I.

    2013-01-01

    Probabilistic methods have the potential to generate multiple and complex white matter fiber tracts in diffusion tensor imaging (DTI). Here, a method based on dynamic programming (DP) is introduced to reconstruct fiber pathways whose complex anatomical structures cannot be resolved at the resolution of standard DTI data. DP is based on optimizing a sequentially additive cost function derived from a Gaussian diffusion model whose covariance is defined by the diffusion tensor. DP is used to determine the optimal path between initial and terminal nodes by efficiently searching over all paths connecting the nodes and choosing the path for which the total probability is maximized. An ex vivo high-resolution scan of a macaque hemi-brain is used to demonstrate the advantages and limitations of DP. DP can generate fiber bundles between distant cortical areas (superior longitudinal fasciculi, arcuate fasciculus, uncinate fasciculus, and fronto-occipital fasciculus), neighboring cortical areas (dorsal and ventral banks of the principal sulcus), as well as cortical projections to the hippocampal formation (cingulum bundle), neostriatum (motor cortical projections to the putamen), thalamus (subcortical bundle), and hippocampal formation projections to the mammillary bodies via the fornix. Validation is established either by comparison with in vivo intracellular transport of horseradish peroxidase in another macaque monkey or by comparison with atlases. DP is able to generate known pathways, including crossing and kissing tracts. Thus, DP has the potential to enhance neuroimaging studies of cortical connectivity. PMID:23879573
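
    A minimal sketch of the core idea: with per-step probabilities, maximizing the total path probability is equivalent to minimizing the sum of -log(p), so the optimal path between fixed start and end nodes can be found with a standard shortest-path dynamic program. The 2-D grid and random probabilities are toy stand-ins for the tensor-derived Gaussian transition model in the paper.

    ```python
    # Max-probability path as a -log shortest path (lazy Dijkstra).
    import heapq
    import math
    import numpy as np

    rng = np.random.default_rng(1)
    p_step = rng.uniform(0.2, 1.0, size=(30, 30))  # prob. of stepping into a cell

    def best_path_cost(p, start, goal):
        cost = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            c, (i, j) = heapq.heappop(heap)
            if c > cost.get((i, j), math.inf):
                continue                           # stale heap entry
            if (i, j) == goal:
                return c
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < p.shape[0] and 0 <= nj < p.shape[1]:
                    nc = c - math.log(p[ni, nj])
                    if nc < cost.get((ni, nj), math.inf):
                        cost[(ni, nj)] = nc
                        heapq.heappush(heap, (nc, (ni, nj)))
        return math.inf

    neg_log = best_path_cost(p_step, (0, 0), (29, 29))
    print(f"max path probability: {math.exp(-neg_log):.3e}")
    ```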

  8. Heuristic reusable dynamic programming: efficient updates of local sequence alignment.

    PubMed

    Hong, Changjin; Tewfik, Ahmed H

    2009-01-01

    Recomputation of previously evaluated similarity results between biological sequences becomes inevitable when researchers realize errors in their sequenced data or when they have to compare nearly similar sequences, e.g., in a family of proteins. We present an efficient scheme for updating local sequence alignments with an affine gap model. In principle, using the previous matching result between two amino acid sequences, we perform a forward-backward alignment to generate heuristic searching bands which are bounded by a set of suboptimal paths. Given a correctly updated sequence, we initially predict a new score of the alignment path for each contour to select the best candidates among them. Then, we run the Smith-Waterman algorithm in this confined space. Furthermore, our heuristic alignment for an updated sequence shows that it can be further accelerated by using reusable dynamic programming (rDP), our prior work. In this study, we successfully validate the "relative node tolerance bound" (RNTB) in the pruned searching space. Furthermore, we improve the computational performance by quantifying the successful RNTB tolerance probability and switching to rDP on perturbation-resilient columns only. In our searching space, derived by a threshold value of 90 percent of the optimal alignment score, we find that 98.3 percent of contours contain correctly updated paths. We also find that our method consumes only 25.36 percent of the runtime of the sparse dynamic programming (sDP) method, and only 2.55 percent of that of normal dynamic programming with the Smith-Waterman algorithm.

  9. Critique of Coleman's Theory of the Vanishing Cosmological Constant

    NASA Astrophysics Data System (ADS)

    Susskind, Leonard

    In these lectures I would like to review some of the criticisms of the Coleman wormhole theory of the vanishing cosmological constant. In particular, I would like to focus on the most fundamental assumption: that the path integral over topologies defines a probability for the cosmological constant which has the form EXP(A), with A being the Baum-Hawking-Coleman saddle point. Coleman argues that the euclidean path integral over all geometries may be dominated by special configurations which consist of large smooth "spheres" connected by any number of narrow wormholes. Formally summing up such configurations gives a very divergent expression for the path integral…

  10. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape predictions. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.

  11. On-chip immunomagnetic separation of bacteria by in-flow dynamic manipulation of paramagnetic beads

    NASA Astrophysics Data System (ADS)

    Ahmed, Shakil; Noh, Jong Wook; Hoyland, James; de Oliveira Hansen, Roana; Erdmann, Helmut; Rubahn, Horst-Günter

    2016-11-01

    Every year, millions of people all over the world fall ill from the consumption of unsafe food, with contaminated and spoiled products of animal origin being the main cause of disease due to bacterial growth. This leads to an intense need for efficient methods for detection of food-related bacteria. In this work, we present a method for integrating immunomagnetic separation of bacteria into microfluidic technology by applying an alternating magnetic field, which manipulates the paramagnetic beads into a sinusoidal path across the whole microchannel, increasing the probability of bacteria capture. The optimum channel geometry, flow rate and alternating magnetic field frequency were investigated, resulting in a capture efficiency of 68%.

  12. Quantization of Non-Lagrangian Systems

    NASA Astrophysics Data System (ADS)

    Kochan, Denis

    A novel method for quantization of non-Lagrangian (open) systems is proposed. It is argued that the essential object, which provides both classical and quantum evolution, is a certain canonical two-form defined in extended velocity space. In this setting classical dynamics is recovered from the stringy-type variational principle, which employs umbilical surfaces instead of histories of the system. Quantization is then accomplished in accordance with the introduced variational principle. The path integral for the transition probability amplitude (propagator) is rearranged to a surface functional integral. In the standard case of closed (Lagrangian) systems the presented method reduces to the standard Feynman's approach. The inverse problem of the calculus of variation, the problem of quantization ambiguity and the quantum mechanics in the presence of friction are analyzed in detail.

  13. Causal mediation analysis with a binary outcome and multiple continuous or ordinal mediators: Simulations and application to an alcohol intervention

    PubMed Central

    Nguyen, Trang Quynh; Webb-Vargas, Yenny; Koning, Ina M.; Stuart, Elizabeth A.

    2016-01-01

    We investigate a method to estimate the combined effect of multiple continuous/ordinal mediators on a binary outcome: 1) fit a structural equation model with probit link for the outcome and identity/probit link for continuous/ordinal mediators, 2) predict potential outcome probabilities, and 3) compute natural direct and indirect effects. Step 2 involves rescaling the latent continuous variable underlying the outcome to address residual mediator variance/covariance. We evaluate the estimation of risk-difference- and risk-ratio-based effects (RDs, RRs) using the ML, WLSMV and Bayes estimators in Mplus. Across most variations in path-coefficient and mediator-residual-correlation signs and strengths, and confounding situations investigated, the method performs well with all estimators, but favors ML/WLSMV for RDs with continuous mediators, and Bayes for RRs with ordinal mediators. Bayes outperforms WLSMV/ML regardless of mediator type when estimating RRs with small potential outcome probabilities and in two other special cases. An adolescent alcohol prevention study is used for illustration. PMID:27158217

  14. Assessment of risk to Boeing commercial transport aircraft from carbon fibers [fiber release from graphite/epoxy materials]

    NASA Technical Reports Server (NTRS)

    Clarke, C. A.; Brown, E. L.

    1980-01-01

    The possible effects of free carbon fibers on aircraft avionic equipment operation, removal costs, and safety were investigated. Possible carbon fiber flow paths, flow rates, and transfer functions into the Boeing 707, 727, 737, 747 aircraft and potentially vulnerable equipment were identified. Probabilities of equipment removal and probabilities of aircraft exposure to carbon fiber were derived.

  15. Comparison Of Reaction Barriers In Energy And Free Energy For Enzyme Catalysis

    NASA Astrophysics Data System (ADS)

    Andrés Cisneros, G.; Yang, Weitao

    Reaction paths on potential energy surfaces obtained from QM/MM calculations of enzymatic or solution reactions depend on the starting structure employed for the path calculations. The free energies associated with these paths should be more reliable for studying reaction mechanisms, because statistical averages are used. To investigate this, the role of enzyme environment fluctuations on reaction paths has been studied with an ab initio QM/MM method for the first step of the reaction catalyzed by 4-oxalocrotonate tautomerase (4OT). Four minimum energy paths (MEPs), determined with two different methods, are compared. The first path (path A) has been determined with a procedure that combines the nudged elastic band (NEB) method and a second-order parallel path optimizer recently developed in our group. The second path (path B) has also been determined by the combined procedure; however, the enzyme environment has been relaxed by molecular dynamics (MD) simulations. The third path (path C) has been determined with the coordinate driving (CD) method, using the enzyme environment from path B. We compare these three paths to a previously determined path (path D) obtained with the CD method. In all four cases the QM/MM-FE method (Y. Zhang et al., JCP, 112, 3483) was employed to obtain the free energy barriers. In the combined procedure, the reaction path is approximated by a small number of images which are optimized to the MEP in parallel, which results in a reduced computational cost; however, this does not allow the FEP calculation on the MEP. In order to perform FEP calculations on these paths, we introduce a modification to the NEB method that enables the addition of as many extra images to the path as needed for the FEP calculations. The calculated potential energy barriers differ between the calculated paths by as much as 5.17 kcal/mol, whereas the largest free energy barrier difference is 1.58 kcal/mol. These results show the importance of including environment fluctuations in the calculation of enzymatic activation barriers.

  16. Combined statistical analysis of landslide release and propagation

    NASA Astrophysics Data System (ADS)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We quantify this relationship by a set of empirical curves. (6) Finally, we multiply the zonal release probability with the impact probability in order to estimate the combined impact probability for each pixel. We demonstrate the model with a 167 km² study area in Taiwan, using an inventory of landslides triggered by Typhoon Morakot. Analyzing the model results leads us to a set of key conclusions: (i) The average composite impact probability over the entire study area corresponds well to the density of observed landslide pixels. Therefore we conclude that the method is valid in general, even though the concept of the zonal release probability bears some conceptual issues that have to be kept in mind. (ii) The parameters used as predictors cannot fully explain the observed distribution of landslides. The size of the release zone influences the composite impact probability to a larger degree than the pixel-based release probability. (iii) The prediction rate increases considerably when excluding the largest, deep-seated landslides from the analysis. We conclude that such landslides are mainly related to geological features hardly reflected in the predictor layers used.
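
    A minimal sketch of the impact-probability routing in step (4): random walks are released from a release pixel over a toy planar DEM, and each visited pixel accumulates the probability drawn from an assumed cumulative distribution of the angle of reach (the lower the reach angle, the less likely a landslide travels that far). All surfaces and the cdf are synthetic stand-ins for r.randomwalk inputs.

    ```python
    # Downslope random walks accumulating impact probability on a grid.
    import numpy as np

    rng = np.random.default_rng(4)
    NROWS, NCOLS, CELL = 60, 60, 10.0          # 10 m cells (assumed)
    DROP_PER_ROW = 8.0                         # metres of elevation per row

    def p_reach(angle_deg):
        # Assumed cdf of observed angles of reach: reach angles under 15 deg
        # were rarely observed, those above 35 deg almost always exceeded.
        return float(np.clip((angle_deg - 15.0) / 20.0, 0.0, 1.0))

    impact = np.zeros((NROWS, NCOLS))
    release = (0, 30)
    N_WALKS = 500
    for _ in range(N_WALKS):
        r, c = release
        while r < NROWS - 1:
            r, c = r + 1, int(np.clip(c + rng.integers(-1, 2), 0, NCOLS - 1))
            drop = DROP_PER_ROW * r
            travel = CELL * np.hypot(r, c - release[1])
            angle = np.degrees(np.arctan2(drop, travel))
            p = p_reach(angle)
            if p == 0.0:                       # beyond any observed reach
                break
            impact[r, c] += p / N_WALKS

    print("max impact probability:", float(impact.max()))
    ```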

  17. Points on the Path to Probability.

    ERIC Educational Resources Information Center

    Kiernan, James F.

    2001-01-01

    Presents the problem of points and the development of the binomial triangle, or Pascal's triangle. Examines various attempts to solve this problem to give students insight into the nature of mathematical discovery. (KHR)

  18. Zero-Slack, Noncritical Paths

    ERIC Educational Resources Information Center

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique method of project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…

  19. Study of a Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill

    DTIC Science & Technology

    2017-09-01

    The discrete-time position of a moving target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling.

  20. The oilspill risk analysis model of the U. S. Geological Survey

    USGS Publications Warehouse

    Smith, R.A.; Slack, J.R.; Wyant, Timothy; Lanfear, K.J.

    1982-01-01

    The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)
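
    A minimal sketch of the trajectory step described above: spill motion is simulated in Monte Carlo fashion, with the wind state evolving through a 3-hour Markov transition matrix and each step advecting the spill by the current vector plus a fraction of the wind. The two-state wind model and all numbers are illustrative, far smaller than the 41 classes used by the model.

    ```python
    # Monte Carlo spill trajectories driven by a Markov wind model.
    import numpy as np

    rng = np.random.default_rng(7)
    # Wind classes: 0 = light NE wind, 1 = strong SW wind (toy, km per 3 h).
    wind_vel = np.array([[-2.0, -1.0], [4.0, 3.0]])
    transition = np.array([[0.8, 0.2],      # P(next class | current class)
                           [0.3, 0.7]])
    current = np.array([1.0, 0.5])          # monthly surface current, km/3h
    WIND_DRIFT = 0.035                      # wind fraction moving the slick

    def spill_track(n_steps=56, state=0):   # 56 * 3 h = one week
        pos = np.zeros(2)
        track = [pos.copy()]
        for _ in range(n_steps):
            state = int(rng.choice(2, p=transition[state]))
            pos = pos + current + WIND_DRIFT * wind_vel[state]
            track.append(pos.copy())
        return np.array(track)

    tracks = [spill_track() for _ in range(500)]     # 500 spills per season
    final = np.array([t[-1] for t in tracks])
    print("mean final displacement (km):", final.mean(axis=0).round(1))
    ```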

  1. The oilspill risk analysis model of the U. S. Geological Survey

    USGS Publications Warehouse

    Smith, R.A.; Slack, J.R.; Wyant, T.; Lanfear, K.J.

    1980-01-01

    The U.S. Geological Survey has developed an oilspill risk analysis model to aid in estimating the environmental hazards of developing oil resources in Outer Continental Shelf (OCS) lease areas. The large, computerized model analyzes the probability of spill occurrence, as well as the likely paths or trajectories of spills in relation to the locations of recreational and biological resources which may be vulnerable. The analytical methodology can easily incorporate estimates of weathering rates, slick dispersion, and possible mitigating effects of cleanup. The probability of spill occurrence is estimated from information on the anticipated level of oil production and method and route of transport. Spill movement is modeled in Monte Carlo fashion with a sample of 500 spills per season, each transported by monthly surface current vectors and wind velocities sampled from 3-hour wind transition matrices. Transition matrices are based on historic wind records grouped in 41 wind velocity classes, and are constructed seasonally for up to six wind stations. Locations and monthly vulnerabilities of up to 31 categories of environmental resources are digitized within an 800,000 square kilometer study area. Model output includes tables of conditional impact probabilities (that is, the probability of hitting a target, given that a spill has occurred), as well as probability distributions for oilspills occurring and contacting environmental resources within preselected vulnerability time horizons. (USGS)

  2. A path integral approach to the Hodgkin-Huxley model

    NASA Astrophysics Data System (ADS)

    Baravalle, Roman; Rosso, Osvaldo A.; Montani, Fernando

    2017-11-01

    To understand how single neurons process sensory information, it is necessary to develop suitable stochastic models to describe the response variability of the recorded spike trains. Spikes in a given neuron are produced by the synergistic action of sodium and potassium voltage-dependent channels that open or close their gates. The Hodgkin-Huxley (HH) equations describe the ionic mechanisms underlying the initiation and propagation of action potentials, through a set of nonlinear ordinary differential equations that approximate the electrical characteristics of the excitable cell. The path integral provides an adequate approach to compute quantities such as transition probabilities, and any stochastic system can be expressed in terms of this methodology. We use the technique of path integrals to determine the analytical solution driven by a non-Gaussian colored noise when considering the HH equations as a stochastic system. The different neuronal dynamics are investigated by estimating the path integral solutions driven by a non-Gaussian colored noise q. More specifically, we take into account the correlational structures of the complex neuronal signals not just by estimating the transition probability associated with the Gaussian approach of the stochastic HH equations, but instead considering much more subtle processes accounting for the non-Gaussian noise that could be induced by the surrounding neural network and by feedforward correlations. This allows us to investigate the underlying dynamics of the neural system when different scenarios of noise correlations are considered.

  3. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE PAGES

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; ...

    2017-12-28

    A method is outlined and tested to detect low level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.
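
    A minimal sketch of the screening step described above: a modeled concentration signature is regressed against successive segments of the measured series, yielding a time series of goodness-of-fit metrics whose peaks flag segments plausibly originating from the suspected source. The signals below are synthetic; the paper uses mesoscale-model signatures and station radioxenon data.

    ```python
    # Sliding goodness-of-fit metric between a signature and a measured series.
    import numpy as np

    rng = np.random.default_rng(3)
    signature = np.exp(-0.5 * ((np.arange(24) - 10) / 4.0) ** 2)  # plume shape
    measured = rng.normal(0.0, 0.2, 200)
    measured[120:144] += 1.5 * signature                          # buried arrival

    def fit_metric_series(measured, signature):
        w = len(signature)
        metrics = []
        for start in range(len(measured) - w + 1):
            seg = measured[start:start + w]
            # Regress the segment on the signature; use r^2 as the metric.
            r = np.corrcoef(seg, signature)[0, 1]
            metrics.append(r ** 2)
        return np.array(metrics)

    metrics = fit_metric_series(measured, signature)
    print("best-fitting segment starts at hour", int(metrics.argmax()))  # ~120
    ```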

  4. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    NASA Astrophysics Data System (ADS)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.

    2018-03-01

    A method is outlined and tested to detect low level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.

  5. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.

    A method is outlined and tested to detect low level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the Feb 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources, but instead, evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher resolution data sets, arbitrary sampling, and time-varying sources is discussed along with a path to evaluate uncertainty in the calculated probabilities.

  6. Evaluating methods for estimating space-time paths of individuals in calculating long-term personal exposure to air pollution

    NASA Astrophysics Data System (ADS)

    Schmitz, Oliver; Soenario, Ivan; Vaartjes, Ilonca; Strak, Maciek; Hoek, Gerard; Brunekreef, Bert; Dijst, Martin; Karssenberg, Derek

    2016-04-01

    Air pollution is one of the major concerns for human health. Associations between air pollution and health are often calculated using long-term (i.e. years to decades) information on personal exposure for each individual in a cohort. Personal exposure is the air pollution aggregated along the space-time path visited by an individual. As air pollution may vary considerably in space and time, for instance due to motorised traffic, the estimation of the spatio-temporal location of a person's space-time path is important to identify the personal exposure. However, long term exposure is mostly calculated using the air pollution concentration at the x, y location of someone's home, which does not consider that individuals are mobile (commuting, recreation, relocation). This assumption is often made as it is a major challenge to estimate space-time paths for all individuals in large cohorts, mostly because limited information on mobility of individuals is available. We address this issue by evaluating multiple approaches for the calculation of space-time paths, thereby estimating the personal exposure along these space-time paths with hyper-resolution air pollution maps at national scale. This allows us to evaluate the effect of the space-time path and resulting personal exposure. Air pollution (e.g. NO2, PM10) was mapped for the entire Netherlands at a resolution of 5×5 m2 using the land use regression models developed in the European Study of Cohorts for Air Pollution Effects (ESCAPE, http://escapeproject.eu/) and the open source software PCRaster (http://www.pcraster.eu). The models use predictor variables like population density, land use, and traffic-related data sets, and are able to model spatial variation and within-city variability of annual average concentration values. We approximated space-time paths for all individuals in a cohort using various aggregations, including those representing space-time paths as the outline of a person's home or associated parcel of land, the 4-digit postal code area or neighbourhood of a person's home, circular areas around the home, and spatial probability distributions of space-time paths during commuting. Personal exposure was estimated by averaging concentrations over these space-time paths, for each individual in a cohort. Preliminary results show considerable differences in a person's exposure across these various approaches of space-time path aggregation, presumably because air pollution shows large variation over short distances.
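
    A minimal sketch contrasting two of the space-time-path aggregations described above on a synthetic concentration grid: exposure taken at the home pixel alone versus averaged over a circular buffer around the home. Grid, resolution, and buffer radius are illustrative assumptions.

    ```python
    # Point exposure vs. circular-buffer exposure on a toy concentration raster.
    import numpy as np

    rng = np.random.default_rng(5)
    RES = 5.0                                   # raster resolution, metres
    conc = rng.gamma(shape=2.0, scale=10.0, size=(400, 400))  # e.g. NO2, ug/m3

    def buffer_mean(conc, home_rc, radius_m):
        rows, cols = np.indices(conc.shape)
        dist = RES * np.hypot(rows - home_rc[0], cols - home_rc[1])
        return conc[dist <= radius_m].mean()

    home = (200, 200)
    print("home pixel only :", round(float(conc[home]), 1))
    print("300 m buffer    :", round(float(buffer_mean(conc, home, 300.0)), 1))
    ```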

  7. Treatment of Solid Rocket Motors that Complies with Established Protocols to Ensure Planetary Protection

    NASA Technical Reports Server (NTRS)

    Stefanski, Philip L.; Soler-Luna, Adrian

    2017-01-01

    This presentation discusses recent work being conducted by the National Aeronautics and Space Administration (NASA) at Marshall Space Flight Center (MSFC) to evaluate various methods that could be employed to provide for planetary protection of those solar system bodies that are candidates for extraterrestrial life, thus preventing contamination of such bodies. MSFC is presently involved in the development phase of the Europa Lander De-Orbital Stage (DOS) braking motor. In order to prevent bio-contamination of this Jovian satellite, three paths are currently being considered. The first is (1) Bio-Reduction of those microscopic organisms in or on the vehicle (in this case a solid rocket motor (SRM)) that might otherwise be transported during the mission. Possible methods being investigated include heat sterilization, application or incorporation of biocide materials, and irradiation. While each method can be made to work, effects on the SRM's components (propellant, liner, insulation, etc.) could well prove deleterious. A second path would be use of (2) Bio-Barrier material(s). So long as such barrier(s) can maintain their integrity, planetary protection should be afforded. Under the harsh conditions encountered during extended spaceflight (vacuum, temperature extremes, radiation), however, such barrier(s) could well experience a breach. Finally, a third path would be to perform (3) Pyrotechnic Sterilization of the SRM during its end-of-mission phase. Multiple pyrotechnic units would be triggered to ensure activation of such an event and provide for a final sterilization before vehicle impact. In light of Europa's stringent bio-reduction targets, the final and best choice to minimize risk will probably be some combination of the above.

  8. Quantum Theory of Wormholes

    NASA Astrophysics Data System (ADS)

    González-Díaz, Pedro F.

    We re-explore the effects of multiply-connected wormholes on ordinary matter at low energies. We find that the path integral describing these effects is given in terms of a Planckian probability distribution for the Coleman α-parameters, rather than a classical Gaussian distribution law. This implies that the path integral over all low-energy fields with the wormhole effective interactions can no longer vary continuously, and that the quantities α2 are interpretable as the momenta of a quantum field. Using the new result that, rather than being given in terms of the Coleman-Hawking probability, the Euclidean action must equal negative entropy, the model predicts a very small but still nonzero cosmological constant and quite reasonable values for the pion and neutrino masses. The divergence problems of Euclidean quantum gravity are also discussed in the light of the above results.

  9. Three paths toward the quantum angle operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gazeau, Jean Pierre, E-mail: gazeau@apc.univ-paris7.fr; Szafraniec, Franciszek Hugon, E-mail: franciszek.szafraniec@uj.edu.pl

    2016-12-15

    We examine mathematical questions around angle (or phase) operators associated with a number operator through a short list of basic requirements. We implement three methods of construction of the quantum angle. The first one is based on operator theory and parallels the definition of angle for the upper half-circle through its cosine and completed by a sign inversion. The two other methods are integral quantizations generalizing in a certain sense the Berezin–Klauder approaches. One method pertains to Weyl–Heisenberg integral quantization of the plane viewed as the phase space of the motion on the line. It depends on a family of “weight” functions on the plane. The third method rests upon coherent state quantization of the cylinder viewed as the phase space of the motion on the circle. The construction of these coherent states depends on a family of probability distributions on the line.

  10. System, apparatus and methods to implement high-speed network analyzers

    DOEpatents

    Ezick, James; Lethin, Richard; Ros-Giralt, Jordi; Szilagyi, Peter; Wohlford, David E

    2015-11-10

    Systems, apparatus and methods for the implementation of high-speed network analyzers are provided. A set of high-level specifications is used to define the behavior of the network analyzer emitted by a compiler. An optimized inline workflow to process regular expressions is presented without sacrificing the semantic capabilities of the processing engine. An optimized packet dispatcher implements a subset of the functions implemented by the network analyzer, providing a fast and slow path workflow used to accelerate specific processing units. Such dispatcher facility can also be used as a cache of policies, wherein if a policy is found, then packet manipulations associated with the policy can be quickly performed. An optimized method of generating DFA specifications for network signatures is also presented. The method accepts several optimization criteria, such as min-max allocations or optimal allocations based on the probability of occurrence of each signature input bit.

  11. Integrating Terrain Maps Into a Reactive Navigation Strategy

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna; Werger, Barry; Seraji, Homayoun

    2006-01-01

    An improved method of processing information for autonomous navigation of a robotic vehicle across rough terrain involves the integration of terrain maps into a reactive navigation strategy. Somewhat more precisely, the method involves the incorporation, into navigation logic, of data equivalent to regional traversability maps. The terrain characteristic is mapped using a fuzzy-logic representation of the difficulty of traversing the terrain. The method is robust in that it integrates a global path-planning strategy with sensor-based regional and local navigation strategies to ensure a high probability of success in reaching a destination and avoiding obstacles along the way. The sensor-based strategies use cameras aboard the vehicle to observe the regional terrain, defined as the area of the terrain that covers the immediate vicinity near the vehicle to a specified distance a few meters away.
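
    A minimal sketch of a fuzzy-logic traversability rule of the kind described above: slope and roughness measures pass through membership functions and combine into a traversability grade. The membership shapes and the rule base are illustrative assumptions, not the paper's actual rule set.

    ```python
    # Fuzzy traversability from slope and roughness.
    def tri(x, a, b, c):
        """Triangular membership function with support (a, c) and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def traversability(slope_deg, roughness):
        flat = tri(slope_deg, -10, 0, 15)
        smooth = tri(roughness, -0.4, 0.0, 0.5)
        steep = tri(slope_deg, 10, 30, 50)
        rough = tri(roughness, 0.3, 1.0, 1.7)
        # Rule: traversable if flat AND smooth; blocked if steep OR rough.
        high = min(flat, smooth)
        low = max(steep, rough)
        return high / (high + low) if high + low > 0 else 0.0

    print(traversability(slope_deg=5.0, roughness=0.1))   # easy terrain, near 1
    print(traversability(slope_deg=35.0, roughness=0.8))  # hard terrain, near 0
    ```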

  12. Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability.

    PubMed

    Lihe Zhang; Jianwu Ai; Bowen Jiang; Huchuan Lu; Xiukui Li

    2018-02-01

    In this paper, we propose a bottom-up saliency model based on absorbing Markov chain (AMC). First, a sparsely connected graph is constructed to capture the local context information of each node. All image boundary nodes and other nodes are, respectively, treated as the absorbing nodes and transient nodes in the absorbing Markov chain. Then, the expected number of times from each transient node to all other transient nodes can be used to represent the saliency value of this node. The absorbed time depends on the weights on the path and their spatial coordinates, which are completely encoded in the transition probability matrix. Considering the importance of this matrix, we adopt different hierarchies of deep features extracted from fully convolutional networks and learn a transition probability matrix, which is called the learnt transition probability matrix. Although this significantly improves performance, salient objects are still not uniformly highlighted. To solve this problem, an angular embedding technique is investigated to refine the saliency results. Based on pairwise local orderings, which are produced by the saliency maps of AMC and boundary maps, we rearrange the global orderings (saliency value) of all nodes. Extensive experiments demonstrate that the proposed algorithm outperforms the state-of-the-art methods on six publicly available benchmark data sets.
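
    A minimal sketch of the absorbed-time computation at the heart of the AMC model: with the transient-to-transient transition block Q, the fundamental matrix N = (I - Q)^(-1) gives expected visit counts, and the row sums of N are the expected steps before absorption, used as saliency. The tiny chain is illustrative; in the paper, Q comes from deep-feature affinities on a graph whose boundary nodes are absorbing.

    ```python
    # Expected absorption time via the fundamental matrix of an absorbing chain.
    import numpy as np

    P = np.array([                  # full transition matrix, rows sum to 1
        [0.0, 0.5, 0.3, 0.2],       # transient node 0
        [0.4, 0.0, 0.3, 0.3],       # transient node 1
        [0.2, 0.2, 0.2, 0.4],       # transient node 2
        [0.0, 0.0, 0.0, 1.0],       # absorbing (boundary) node
    ])
    Q = P[:3, :3]                   # transient-to-transient block
    N = np.linalg.inv(np.eye(3) - Q)
    absorbed_time = N.sum(axis=1)   # expected steps to absorption per node
    saliency = absorbed_time / absorbed_time.max()
    print(saliency.round(3))
    ```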

  13. Randomized path optimization for the mitigated counter detection of UAVs

    DTIC Science & Technology

    2017-06-01

    A recursive Bayesian filtering scheme is used to assimilate noisy measurements of the UAV's position to predict its terminal location. The KL divergence is used to compare the probability density of aircraft termination to a normal distribution around the true terminal location, as a measure of the algorithm's success.

  14. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm to find a traversable path in rugged terrain. The second planner uses a global optimization method with a cost function that is the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and the resulting paths are given, showing the effectiveness of the path planners in finding near-optimal paths. The features of the methods and their suitability and application for rover path planning are compared.

  15. A robust method to forecast volcanic ash clouds

    USGS Publications Warehouse

    Denlinger, Roger P.; Pavolonis, Mike; Sieglaff, Justin

    2012-01-01

    Ash clouds emanating from volcanic eruption columns often form trails of ash extending thousands of kilometers through the Earth's atmosphere, disrupting air traffic and posing a significant hazard to air travel. To mitigate such hazards, the community charged with reducing flight risk must accurately assess risk of ash ingestion for any flight path and provide robust forecasts of volcanic ash dispersal. In response to this need, a number of different transport models have been developed for this purpose and applied to recent eruptions, providing a means to assess uncertainty in forecasts. Here we provide a framework for optimal forecasts and their uncertainties given any model and any observational data. This involves random sampling of the probability distributions of input (source) parameters to a transport model and iteratively running the model with different inputs, each time assessing the predictions that the model makes about ash dispersal by direct comparison with satellite data. The results of these comparisons are embodied in a likelihood function whose maximum corresponds to the minimum misfit between model output and observations. Bayes theorem is then used to determine a normalized posterior probability distribution and from that a forecast of future uncertainty in ash dispersal. The nature of ash clouds in heterogeneous wind fields creates a strong maximum likelihood estimate in which most of the probability is localized to narrow ranges of model source parameters. This property is used here to accelerate probability assessment, producing a method to rapidly generate a prediction of future ash concentrations and their distribution based upon assimilation of satellite data as well as model and data uncertainties. Applying this method to the recent eruption of Eyjafjallajökull in Iceland, we show that the 3 and 6 h forecasts of ash cloud location probability encompassed the location of observed satellite-determined ash cloud loads, providing an efficient means to assess all of the hazards associated with these ash clouds.
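
    A minimal sketch of the assimilation loop described above: source parameters are drawn from their prior, a forward transport model predicts ash loads, each draw is scored by a Gaussian likelihood against satellite-observed loads, and Bayes' theorem yields normalized posterior weights. The linear "transport model" and all numbers are toy stand-ins for a real dispersion code and satellite retrievals.

    ```python
    # Likelihood-weighted posterior over a single source parameter.
    import numpy as np

    rng = np.random.default_rng(11)
    obs = np.array([2.1, 3.9, 8.2, 3.8, 1.9])        # satellite ash loads (toy)
    SIGMA_OBS = 1.0                                  # assumed data uncertainty

    def forward_model(source_rate):
        # Toy transport: fixed dilution pattern scaled by source strength.
        pattern = np.array([0.25, 0.5, 1.0, 0.5, 0.25])
        return source_rate * pattern

    draws = rng.uniform(1.0, 20.0, size=5000)        # prior over eruption rate
    log_like = np.array([
        -0.5 * np.sum((forward_model(q) - obs) ** 2) / SIGMA_OBS ** 2
        for q in draws
    ])
    w = np.exp(log_like - log_like.max())
    w /= w.sum()                                     # normalized posterior weights
    print(f"posterior mean source rate: {float(np.sum(w * draws)):.2f}")
    ```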

  16. An improved reaction path optimization method using a chain of conformations

    NASA Astrophysics Data System (ADS)

    Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro

    2018-05-01

    The efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces by using chains of conformations. No artificial spring force is used in the FPO method to ensure the equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. The use of this method enabled the optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. It was also successfully utilized to define the MEP of the isomerization of the glycine molecule in water.

  17. A Trust-Based Secure Routing Scheme Using the Traceback Approach for Energy-Harvesting Wireless Sensor Networks.

    PubMed

    Tang, Jiawei; Liu, Anfeng; Zhang, Jian; Xiong, Neal N; Zeng, Zhiwen; Wang, Tian

    2018-03-01

    The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. If the probability of marking is higher, more nodes on the data packet routing path will be marked, and the sink will be more likely to trace back the routing path and find malicious nodes from this notification. When data packets are routed again, they tend to bypass these malicious nodes, which makes the success rate of routing higher and leads to improved network security. When the battery level is low, the probability of marking will be decreased, which saves energy. For logging, when the battery level is high, the network adopts a larger probability of marking and a smaller probability of logging to transmit notification to the sink, which reserves enough storage space to meet the storage demand for periods when the battery is low; when the battery level is low, increasing the probability of logging can reduce energy consumption. Once the remaining battery level is high enough, nodes send the previously logged notifications to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notifications received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.

  18. A Trust-Based Secure Routing Scheme Using the Traceback Approach for Energy-Harvesting Wireless Sensor Networks

    PubMed Central

    Tang, Jiawei; Zhang, Jian; Zeng, Zhiwen; Wang, Tian

    2018-01-01

    The Internet of things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of TBSR are (a) the source nodes send data and notification to sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security. (b) Furthermore, the data and notification adopt a dynamic probability of marking and logging approach during the routing. Therefore, when attacked, the network will adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined based on the level of battery remaining; when nodes harvest more energy, the probability of marking is higher, which can improve network security. If the probability of marking is higher, more nodes on the data packet routing path will be marked, and the sink will be more likely to trace back the routing path and find malicious nodes from this notification. When data packets are routed again, they tend to bypass these malicious nodes, which makes the success rate of routing higher and leads to improved network security. When the battery level is low, the probability of marking will be decreased, which saves energy. For logging, when the battery level is high, the network adopts a larger probability of marking and a smaller probability of logging to transmit notification to the sink, which reserves enough storage space to meet the storage demand for periods when the battery is low; when the battery level is low, increasing the probability of logging can reduce energy consumption. Once the remaining battery level is high enough, nodes send the previously logged notifications to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme has been improved comprehensively; it can effectively increase the quantity of notifications received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%. PMID:29494561

  19. Medical writing on an accelerated path in India.

    PubMed

    Shirke, Sarika

    2015-01-01

    The medical writing industry is on an upward growth path in India. This is probably driven by an increasing urgency to have high-quality documents authored to support timely drug approvals, complemented by the realization that the required competencies are available in emerging geographies such as India. This article reviews the business landscape and the opportunities and challenges associated with outsourcing medical writing work to India. It also analyzes the core competencies that a medical writer should possess and lists various associations supporting learning in this domain.

  20. Scheduling of House Development Projects with CPM and PERT Method for Time Efficiency (Case Study: House Type 36)

    NASA Astrophysics Data System (ADS)

    Kholil, Muhammad; Nurul Alfa, Bonitasari; Hariadi, Madjumsyah

    2018-04-01

    Network planning is one of the management techniques used to plan and control the implementation of a project, showing the relationships between activities. The objective of this research is to apply network planning to a house construction project at CV. XYZ and to determine the role of network planning in increasing time efficiency, so that the optimal project completion period can be obtained. This research uses a descriptive method, with data collected by direct observation at the company, interviews, and a literature study. The result of this research is an optimal time plan for the project work. Based on the results, it can be concluded that the use of both methods in scheduling the house construction project has a very significant effect on the completion time of the project. With the CPM (Critical Path Method), the company can complete the project in 131 days; the PERT (Program Evaluation and Review Technique) method takes 136 days. The PERT calculation yields Z = -0.66, or 0.2546 from the normal distribution table, giving a probability of 74.54%. This means that the likelihood that the house construction project activities can be completed on time is quite high. Without either method, project completion takes 173 days. So, using the CPM method, the company can save up to 42 days and gains time efficiency by using network planning.
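
    The probability quoted above follows from the standard PERT normal approximation, sketched below with illustrative three-point activity estimates (the paper's own activity durations are not given). The quoted 74.54% corresponds to |Z| = 0.66, since Phi(0.66) = 1 - 0.2546 = 0.7454.

    ```python
    # Standard PERT completion-probability calculation: each critical activity
    # gets te = (a + 4m + b)/6 and variance ((b - a)/6)^2; Z is computed over
    # the critical path and the on-time probability is the normal CDF of Z.
    from math import sqrt
    from scipy.stats import norm

    # (optimistic a, most-likely m, pessimistic b) for critical-path activities;
    # these three activities are assumed for illustration.
    activities = [(40, 50, 66), (30, 40, 56), (35, 45, 61)]

    te = sum((a + 4 * m + b) / 6 for a, m, b in activities)  # expected duration
    var = sum(((b - a) / 6) ** 2 for a, m, b in activities)  # path variance
    Ts = te + 0.66 * sqrt(var)                               # target giving Z=0.66

    Z = (Ts - te) / sqrt(var)
    print(f"Te = {te:.1f} days, Z = {Z:.2f}, "
          f"P(on time) = {norm.cdf(Z):.4f}")                 # 0.7454
    ```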

  1. Orbital evolution of some Centaurs

    NASA Astrophysics Data System (ADS)

    Kovalenko, Nataliya; Babenko, Yuri; Churyumov, Klim

    2002-11-01

    In this work we investigated the dynamical evolution of the Centaur objects 2060 (Chiron), 5145 (Pholus), 7066 (Nessus), 8405 (Asbolus), 10199 (Chariklo), 10370 (Hylonome), and the Scattered-Disk object 15874. We have carried out orbital integration of test particles with initial orbits similar to those of these objects. Calculations were carried out for ±600 kyr to ±10 Myr, starting at the current epoch and using the implicit single-sequence Everhart method. Twelve variational orbits for each of the selected Centaurs have also been numerically integrated for ±200 kyr toward the past and the future. The most probable paths were traced up to ±1 Myr. The character of the changes in orbital elements and the peculiarities of close approaches to the giant planets are discussed.

  2. Probability density function of the intensity of a laser beam propagating in the maritime environment.

    PubMed

    Korotkova, Olga; Avramov-Zamurovic, Svetlana; Malek-Madani, Reza; Nelson, Charles

    2011-10-10

    A number of field experiments measuring the fluctuating intensity of a laser beam propagating along horizontal paths in the maritime environment were performed over sub-kilometer distances at the United States Naval Academy. Both above-ground and over-water links were explored. Two different detection schemes, one photographing the beam on a white board and the other capturing the beam directly with a CCD sensor, gave consistent results. The probability density function (pdf) of the fluctuating intensity is reconstructed with the help of two theoretical models, the Gamma-Gamma and the Gamma-Laguerre, and compared with the intensity histograms. It is found that the on-ground experimental results are in good agreement with theoretical predictions. The results obtained over the water paths lead to appreciable discrepancies, especially in the case of the Gamma-Gamma model. These discrepancies are attributed to the presence of various scatterers along the path of the beam, such as water droplets, aerosols and other airborne particles. Our paper's main contribution is a methodology for computing the pdf of the laser beam intensity in the maritime environment from field measurements.
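
    For reference, the standard Gamma-Gamma pdf for unit-mean irradiance used in such reconstructions can be evaluated as below; the parameter values here are illustrative, not fitted to the experiment's data.

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def gamma_gamma_pdf(I, alpha, beta):
    """Standard Gamma-Gamma pdf for normalized (unit-mean) intensity I,
    with large/small-scale scintillation parameters alpha and beta."""
    coeff = 2.0 * (alpha * beta) ** ((alpha + beta) / 2.0) / (gamma(alpha) * gamma(beta))
    return coeff * I ** ((alpha + beta) / 2.0 - 1.0) * kv(alpha - beta,
                                                          2.0 * np.sqrt(alpha * beta * I))

# Example: moderate-turbulence parameters (illustrative values only)
I = np.linspace(0.01, 3.0, 300)
pdf = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)
```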

  3. Statistical Modelling and Characterization of Experimental mm-Wave Indoor Channels for Future 5G Wireless Communication Networks

    PubMed Central

    Al-Samman, A. M.; Rahman, T. A.; Azmi, M. H.; Hindia, M. N.; Khan, I.; Hanafi, E.

    2016-01-01

    This paper presents an experimental characterization of millimeter-wave (mm-wave) channels in the 6.5 GHz, 10.5 GHz, 15 GHz, 19 GHz, 28 GHz and 38 GHz frequency bands in an indoor corridor environment. More than 4,000 power delay profiles were measured across the bands using an omnidirectional transmitter antenna and a highly directional horn receiver antenna for both co- and cross-polarized antenna configurations. This paper develops a new path-loss model to account for the frequency attenuation with distance, which we term the frequency attenuation (FA) path-loss model and introduce a frequency-dependent attenuation factor. The large-scale path loss was characterized based on both new and well-known path-loss models. A general and less complex method is also proposed to estimate the cross-polarization discrimination (XPD) factor of close-in reference distance with the XPD (CIX) and ABG with the XPD (ABGX) path-loss models to avoid the computational complexity of minimum mean square error (MMSE) approach. Moreover, small-scale parameters such as root mean square (RMS) delay spread, mean excess (MN-EX) delay, dispersion factors and maximum excess (MAX-EX) delay parameters were used to characterize the multipath channel dispersion. Multiple statistical distributions for RMS delay spread were also investigated. The results show that our proposed models are simpler and more physically-based than other well-known models. The path-loss exponents for all studied models are smaller than that of the free-space model by values in the range of 0.1 to 1.4 for all measured frequencies. The RMS delay spread values varied between 0.2 ns and 13.8 ns, and the dispersion factor values were less than 1 for all measured frequencies. The exponential and Weibull probability distribution models best fit the RMS delay spread empirical distribution for all of the measured frequencies in all scenarios. PMID:27654703
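
    The paper's FA model details are not reproduced here; as background, a sketch of the well-known close-in (CI) reference-distance model that such mm-wave studies build on, with an assumed log-normal shadowing term. All parameter values are illustrative.

```python
import numpy as np

def fspl_db(f_ghz, d0_m=1.0):
    """Free-space path loss at reference distance d0 (Friis formula), in dB."""
    c = 3e8
    return 20.0 * np.log10(4.0 * np.pi * d0_m * f_ghz * 1e9 / c)

def ci_path_loss_db(d_m, f_ghz, n, sigma_db=0.0, d0_m=1.0):
    """Close-in model: PL(d) = FSPL(f, d0) + 10*n*log10(d/d0) + X_sigma."""
    shadowing = np.random.normal(0.0, sigma_db) if sigma_db > 0 else 0.0
    return fspl_db(f_ghz, d0_m) + 10.0 * n * np.log10(d_m / d0_m) + shadowing

# Example: 28 GHz link at 20 m with an assumed corridor exponent n = 1.8
print(ci_path_loss_db(d_m=20.0, f_ghz=28.0, n=1.8))
```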

  4. Statistical Modelling and Characterization of Experimental mm-Wave Indoor Channels for Future 5G Wireless Communication Networks.

    PubMed

    Al-Samman, A M; Rahman, T A; Azmi, M H; Hindia, M N; Khan, I; Hanafi, E

    This paper presents an experimental characterization of millimeter-wave (mm-wave) channels in the 6.5 GHz, 10.5 GHz, 15 GHz, 19 GHz, 28 GHz and 38 GHz frequency bands in an indoor corridor environment. More than 4,000 power delay profiles were measured across the bands using an omnidirectional transmitter antenna and a highly directional horn receiver antenna, for both co- and cross-polarized antenna configurations. This paper develops a new path-loss model to account for frequency attenuation with distance, which we term the frequency attenuation (FA) path-loss model; it introduces a frequency-dependent attenuation factor. The large-scale path loss was characterized with both the new and well-known path-loss models. A general and less complex method is also proposed to estimate the cross-polarization discrimination (XPD) factor of the close-in reference distance with XPD (CIX) and ABG with XPD (ABGX) path-loss models, avoiding the computational complexity of the minimum mean square error (MMSE) approach. Moreover, small-scale parameters such as the root mean square (RMS) delay spread, mean excess (MN-EX) delay, dispersion factors and maximum excess (MAX-EX) delay were used to characterize the multipath channel dispersion. Multiple statistical distributions for the RMS delay spread were also investigated. The results show that our proposed models are simpler and more physically based than other well-known models. The path-loss exponents for all studied models are smaller than that of the free-space model by 0.1 to 1.4 for all measured frequencies. The RMS delay spread values varied between 0.2 ns and 13.8 ns, and the dispersion factor values were less than 1 for all measured frequencies. The exponential and Weibull probability distributions best fit the empirical RMS delay spread distribution for all measured frequencies in all scenarios.

  5. Virtual-Lattice Based Intrusion Detection Algorithm over Actuator-Assisted Underwater Wireless Sensor Networks

    PubMed Central

    Yan, Jing; Li, Xiaolei; Luo, Xiaoyuan; Guan, Xinping

    2017-01-01

    Due to the lack of a physical line of defense, intrusion detection becomes one of the key issues in applications of underwater wireless sensor networks (UWSNs), especially when the confidentiality has prime importance. However, the resource-constrained property of UWSNs such as sparse deployment and energy constraint makes intrusion detection a challenging issue. This paper considers a virtual-lattice-based approach to the intrusion detection problem in UWSNs. Different from most existing works, the UWSNs consist of two kinds of nodes, i.e., sensor nodes (SNs), which cannot move autonomously, and actuator nodes (ANs), which can move autonomously according to the performance requirement. With the cooperation of SNs and ANs, the intruder detection probability is defined. Then, a virtual lattice-based monitor (VLM) algorithm is proposed to detect the intruder. In order to reduce the redundancy of communication links and improve detection probability, an optimal and coordinative lattice-based monitor patrolling (OCLMP) algorithm is further provided for UWSNs, wherein an equal price search strategy is given for ANs to find the shortest patrolling path. Under VLM and OCLMP algorithms, the detection probabilities are calculated, while the topology connectivity can be guaranteed. Finally, simulation results are presented to show that the proposed method in this paper can improve the detection accuracy and save the energy consumption compared with the conventional methods. PMID:28531127

  6. CMPF: class-switching minimized pathfinding in metabolic networks.

    PubMed

    Lim, Kevin; Wong, Limsoon

    2012-01-01

    The metabolic network is an aggregation of enzyme-catalyzed reactions that convert one compound to another. Paths in a metabolic network are a sequence of enzymes that describe how a chemical compound of interest can be produced in a biological system. As the number of such paths is quite large, many methods have been developed to score paths so that the k shortest paths represent the set of paths that are biologically meaningful or efficient. However, these approaches do not consider whether the sequence of enzymes can be manufactured in the same pathway/species/localization. As a result, a predicted sequence might consist of groups of enzymes that operate in distinct pathways/species/localizations and may not truly reflect the events occurring within the cell. We propose a path weighting method, CMPF (Class-switching Minimized Pathfinder), to search for routes in a metabolic network that minimize pathway switching. In biological terms, a pathway is a series of chemical reactions that defines a specific function (e.g., glycolysis). We conjecture that routes crossing many pathways are inefficient, since different pathways define different metabolic functions. In addition, native routes are well characterized within pathways, suggesting that reasonable paths should not involve too many pathway switches. Our method generalizes to any case where reactions participate in a class set (e.g., pathways, species or cellular localization), so that the predicted paths have minimal class crossings. We show that our method generates k-paths that involve the least class switching. We also show that native paths are recoverable and that alternative paths deviate less from native paths than with other methods. This suggests that paths ranked by our method could be a way to predict paths that are likely to occur in biological systems.
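
    CMPF's exact weighting scheme is not reproduced here; a sketch of the general idea under stated assumptions: a Dijkstra search in which a step between nodes sharing no pathway class pays an extra switch penalty. Graph layout, class sets and costs are hypothetical.

```python
import heapq

def min_switch_path(graph, classes, start, goal, switch_penalty=1.0, step_cost=0.1):
    """Shortest path where each pathway-class switch is penalized.

    graph: dict node -> iterable of neighbor nodes (reaction edges)
    classes: dict node -> set of pathway classes the node belongs to
    """
    pq = [(0.0, start, [start])]
    best = {start: 0.0}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        for nxt in graph.get(node, ()):
            extra = step_cost
            if classes.get(node, set()).isdisjoint(classes.get(nxt, set())):
                extra += switch_penalty  # crossing into a different pathway
            ncost = cost + extra
            if ncost < best.get(nxt, float("inf")):
                best[nxt] = ncost
                heapq.heappush(pq, (ncost, nxt, path + [nxt]))
    return float("inf"), None

graph = {"glc": ["g6p"], "g6p": ["f6p", "6pg"], "f6p": ["pyr"], "6pg": ["pyr"]}
classes = {"glc": {"glycolysis"}, "g6p": {"glycolysis", "ppp"},
           "f6p": {"glycolysis"}, "6pg": {"ppp"}, "pyr": {"glycolysis"}}
print(min_switch_path(graph, classes, "glc", "pyr"))
```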

  7. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in the gas phase.

  8. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they are likely to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified with a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents.

  9. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they are likely to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified with a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents. PMID:28135304

  10. Quantum gravity in timeless configuration space

    NASA Astrophysics Data System (ADS)

    Gomes, Henrique

    2017-12-01

    On the path towards quantum gravity we find friction between temporal relations in quantum mechanics (QM) (where they are fixed and field-independent), and in general relativity (where they are field-dependent and dynamic). This paper aims to attenuate that friction, by encoding gravity in the timeless configuration space of spatial fields with dynamics given by a path integral. The framework demands that boundary conditions for this path integral be uniquely given, but unlike other approaches where they are prescribed—such as the no-boundary and the tunneling proposals—here I postulate basic principles to identify boundary conditions in a large class of theories. Uniqueness arises only if a reduced configuration space can be defined and if it has a profoundly asymmetric fundamental structure. These requirements place strong restrictions on the field and symmetry content of theories encompassed here; shape dynamics is one such theory. When these constraints are met, any emerging theory will have a Born rule given merely by a particular volume element built from the path integral in (reduced) configuration space. Also as in other boundary proposals, Time, including space-time, emerges as an effective concept; valid for certain curves in configuration space but not assumed from the start. When some such notion of time becomes available, conservation of (positive) probability currents ensues. I show that, in the appropriate limits, a Schrödinger equation dictates the evolution of weakly coupled source fields on a classical gravitational background. Due to the asymmetry of reduced configuration space, these probabilities and currents avoid a known difficulty of standard WKB approximations for Wheeler–DeWitt in minisuperspace: the selection of a unique Hamilton–Jacobi solution to serve as background. I illustrate these constructions with a simple example of a full quantum gravitational theory (i.e. not in minisuperspace) for which the formalism is applicable, and give a formula for calculating gravitational semi-classical relative probabilities in it.

  11. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one while keeping the cropping ratio and warping ratio of each frame within a proper range. In this paper we use an improved warping-based motion representation model and propose a Gaussian-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax, as we calculate the space-time correlation of adjacent grids and then use a Gaussian kernel to weight the motion of adjacent grids. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.
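
    A minimal sketch of Gaussian-kernel path smoothing, the core ingredient of such methods; the full crop/distortion-constrained multi-path optimization is not modeled, and the kernel width is an arbitrary assumption.

```python
import numpy as np

def gaussian_smooth_path(path, sigma=5.0, radius=15):
    """Smooth a per-frame camera-parameter trajectory with a Gaussian kernel.

    path: 1D array of accumulated motion parameters (e.g., x-translation).
    """
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    padded = np.pad(path, radius, mode="edge")  # edge-pad to avoid boundary shrinkage
    return np.convolve(padded, kernel, mode="valid")

# Example: smooth a jittery horizontal translation track
raw = np.cumsum(np.random.randn(200))
smooth = gaussian_smooth_path(raw)
```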

  12. Multiscale/Multifunctional Probabilistic Composite Fatigue

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2010-01-01

    A multilevel (multiscale/multifunctional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.

  13. Exotic looped trajectories of photons in three-slit interference

    PubMed Central

    Magaña-Loaiza, Omar S; De Leon, Israel; Mirhosseini, Mohammad; Fickler, Robert; Safari, Akbar; Mick, Uwe; McIntyre, Brian; Banzer, Peter; Rodenburg, Brandon; Leuchs, Gerd; Boyd, Robert W.

    2016-01-01

    The validity of the superposition principle and of Born's rule are well-accepted tenets of quantum mechanics. Surprisingly, it has been predicted that the intensity pattern formed in a three-slit experiment is seemingly in contradiction with the most conventional form of the superposition principle when exotic looped trajectories are taken into account. However, the probability of observing such paths is typically very small, thus rendering them extremely difficult to measure. Here we confirm the validity of Born's rule and present the first experimental observation of exotic trajectories as additional paths for the light by directly measuring their contribution to the formation of optical interference fringes. We accomplish this by enhancing the electromagnetic near-fields in the vicinity of the slits through the excitation of surface plasmons. This process increases the probability of occurrence of these exotic trajectories, demonstrating that they are related to the near-field component of the photon's wavefunction. PMID:28008907

  14. Performance of multi-hop parallel free-space optical communication over gamma-gamma fading channel with pointing errors.

    PubMed

    Gao, Zhengguang; Liu, Hongzhan; Ma, Xiaoping; Lu, Wei

    2016-11-10

    Multi-hop parallel relaying is considered in a free-space optical (FSO) communication system deploying binary phase-shift keying (BPSK) modulation under the combined effects of a gamma-gamma (GG) distribution and misalignment fading. Based on the best path selection criterion, the cumulative distribution function (CDF) of this cooperative random variable is derived. Then the performance of this optical mesh network is analyzed in detail. A Monte Carlo simulation is also conducted to demonstrate the effectiveness of the results for the average bit error rate (ABER) and outage probability. The numerical results show that a smaller average transmitted optical power is needed to achieve the same ABER and outage probability when the multi-hop parallel network is used in FSO links. Furthermore, using more hops and cooperative paths can improve the quality of the communication.
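
    A minimal Monte Carlo sketch of outage probability under best-path selection over i.i.d. Gamma-Gamma links, sampling GG irradiance as a product of two gamma variates; pointing errors and the hop structure are omitted for brevity, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gg_samples(alpha, beta, size):
    """Unit-mean Gamma-Gamma irradiance as a product of two gamma variates."""
    x = rng.gamma(alpha, 1.0 / alpha, size)
    y = rng.gamma(beta, 1.0 / beta, size)
    return x * y

def outage_best_path(n_paths, alpha, beta, threshold, trials=200_000):
    """P(outage) when the best of n_paths independent GG links is selected;
    outage occurs when even the strongest path falls below the threshold."""
    I = gg_samples(alpha, beta, (trials, n_paths))
    return np.mean(I.max(axis=1) < threshold)

for n in (1, 2, 3):
    print(n, outage_best_path(n, alpha=4.0, beta=1.9, threshold=0.3))
```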

  15. Exotic looped trajectories of photons in three-slit interference.

    PubMed

    Magaña-Loaiza, Omar S; De Leon, Israel; Mirhosseini, Mohammad; Fickler, Robert; Safari, Akbar; Mick, Uwe; McIntyre, Brian; Banzer, Peter; Rodenburg, Brandon; Leuchs, Gerd; Boyd, Robert W

    2016-12-23

    The validity of the superposition principle and of Born's rule are well-accepted tenets of quantum mechanics. Surprisingly, it has been predicted that the intensity pattern formed in a three-slit experiment is seemingly in contradiction with the most conventional form of the superposition principle when exotic looped trajectories are taken into account. However, the probability of observing such paths is typically very small, thus rendering them extremely difficult to measure. Here we confirm the validity of Born's rule and present the first experimental observation of exotic trajectories as additional paths for the light by directly measuring their contribution to the formation of optical interference fringes. We accomplish this by enhancing the electromagnetic near-fields in the vicinity of the slits through the excitation of surface plasmons. This process increases the probability of occurrence of these exotic trajectories, demonstrating that they are related to the near-field component of the photon's wavefunction.

  16. Wormholes and the cosmological constant problem.

    NASA Astrophysics Data System (ADS)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler–DeWitt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  17. Using new edges for anomaly detection in computer networks

    DOEpatents

    Neil, Joshua Charles

    2017-07-04

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.
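
    A minimal sketch of the score-combination step described in the patent abstract; summing negative log-probabilities is one natural combination rule (the text only says probabilities "may be combined"), and the quantile-based threshold stands in for the empirical distribution of historical scores.

```python
import math

def path_score(edge_probs):
    """Combine per-edge probabilities into an anomaly score for a path:
    rarer edges -> lower probability -> higher (more anomalous) score."""
    return sum(-math.log(p) for p in edge_probs)

def threshold_from_history(historical_scores, quantile=0.999):
    """Empirical threshold: flag paths scoring above a high quantile of
    scores observed under historical (benign) conditions."""
    s = sorted(historical_scores)
    return s[int(quantile * (len(s) - 1))]

# Example: a path containing one very rare (possibly new) edge stands out
print(path_score([0.9, 0.85, 1e-4]))
```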

  18. Using new edges for anomaly detection in computer networks

    DOEpatents

    Neil, Joshua Charles

    2015-05-19

    Creation of new edges in a network may be used as an indication of a potential attack on the network. Historical data of a frequency with which nodes in a network create and receive new edges may be analyzed. Baseline models of behavior among the edges in the network may be established based on the analysis of the historical data. A new edge that deviates from a respective baseline model by more than a predetermined threshold during a time window may be detected. The new edge may be flagged as potentially anomalous when the deviation from the respective baseline model is detected. Probabilities for both new and existing edges may be obtained for all edges in a path or other subgraph. The probabilities may then be combined to obtain a score for the path or other subgraph. A threshold may be obtained by calculating an empirical distribution of the scores under historical conditions.

  19. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in planning paths. In the planning space, the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead point problem effectively.
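
    A minimal sketch of the baseline APF law that the paper improves upon: an attractive pull toward the goal plus short-range repulsion from obstacles, integrated with simple gradient steps. The paper's additional optimal-control correction force is not modeled, and all gains are illustrative.

```python
import numpy as np

def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=5.0):
    """Classic APF force: F = k_att*(goal - pos) + sum of repulsive gradients."""
    f = k_att * (goal - pos)                      # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                         # repulsion only within range d0
            f += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return f

pos, goal = np.array([0.0, 0.0]), np.array([50.0, 40.0])
obstacles = [np.array([20.0, 15.0]), np.array([35.0, 30.0])]
path = [pos.copy()]
for _ in range(500):
    pos += 0.05 * apf_force(pos, goal, obstacles)  # explicit Euler step
    path.append(pos.copy())
    if np.linalg.norm(goal - pos) < 0.5:
        break
```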

  20. Effective bandwidth guaranteed routing schemes for MPLS traffic engineering

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Jain, Nidhi

    2001-07-01

    In this work, we present online algorithms for dynamically routing bandwidth-guaranteed label switched paths (LSPs), where LSP set-up requests (specified by an ingress-egress router pair and a bandwidth requirement) arrive one by one and there is no a priori knowledge of future LSP set-up requests. In addition, we consider rerouting of LSPs, which has not been well studied in previous work on LSP routing. The need for LSP rerouting arises in a number of ways: the occurrence of faults (link and/or node failures), re-optimization of existing LSPs' routes to accommodate traffic fluctuations, requests with higher priorities, and so on. We formulate bandwidth-guaranteed LSP routing with rerouting capability as a multi-commodity flow problem, whose solution serves as the benchmark for the computationally cheaper algorithms studied in this paper. Furthermore, to utilize network resources more efficiently, we propose online routing algorithms that route bandwidth demands over multiple paths at the ingress router to satisfy customer requests while providing better service survivability. Traffic splitting and distribution over the multiple paths are handled with table-based hashing schemes while the order of packets within a flow is preserved. Preliminary simulations show the performance of different design choices and the effectiveness of the rerouting and multi-path routing algorithms in terms of LSP set-up request rejection probability and bandwidth blocking probability.
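
    A minimal sketch of table-based hashing for multi-path traffic splitting, assuming a weighted slot table and a flow-identifier hash; by hashing the flow ID, all packets of one flow map to the same path, preserving in-flow packet order. Names and weights are illustrative.

```python
import hashlib

def build_split_table(paths, weights, table_size=64):
    """Weighted slot table: each path receives slots proportional to its weight."""
    table, acc, total = [], 0.0, sum(weights)
    for path, w in zip(paths, weights):
        acc += w
        while len(table) < round(table_size * acc / total):
            table.append(path)
    return table

def pick_path(table, flow_id):
    """Hash the flow identifier into a slot; one flow always takes one path."""
    h = int(hashlib.md5(flow_id.encode()).hexdigest(), 16)
    return table[h % len(table)]

table = build_split_table(["LSP-A", "LSP-B"], weights=[3, 1])
print(pick_path(table, "10.0.0.1:10.0.1.9:tcp:443"))
```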

  1. Hydrogeological characterization of flow system in a karstic aquifer, Seymareh dam, Iran

    NASA Astrophysics Data System (ADS)

    Behrouj Peely, Ahmad; Mohammadi, Zargham; Raeisi, Ezzatollah; Solgi, Khashayar; Mosavi, Mohammad J.; Kamali, Majid

    2018-07-01

    In order to determine the characteristics of the flow system in a karstic aquifer, an extensive hydrogeological study including a dye tracing test was conducted. The aquifer is situated at the left abutment of Seymareh Dam, in the Ravandi Anticline, and discharges through more than 50 springs on the southern flank. The flow system in the aquifer is mainly controlled by the reservoir of Seymareh Dam. Time variations of the spring discharge and of the water table in the observation wells were highly correlated with the reservoir water level. The average groundwater velocity ranges from 0.2 to more than 14 m/h based on the dye tracing test. The probable flow paths were differentiated into two groups, comprising the flow paths in the northern and southern flanks of the Ravandi Anticline. The type of groundwater flow along the proposed flow paths is determined as diffuse or conduit, considering the groundwater velocity and the shape of the breakthrough curves. An index is proposed for differentiating diffuse and conduit flow systems based on the relationship between groundwater velocity and hydraulic gradient. The dominant geometry of the flow routes (e.g., conduit diameter and fracture aperture) is estimated for the groundwater flow paths toward the springs. Based on the velocity variations and the variation coefficient of the water table and spring discharge in map view, a major karst conduit has probably developed in the aquifer. This research emphasizes the application of an extensive hydrogeological study for the characterization of the flow system in a karst aquifer.

  2. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  3. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  4. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  5. Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning

    NASA Astrophysics Data System (ADS)

    Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu

    Robot path planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associated path-fragments connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up the exploration time. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path-fragments), and the resulting number of path-fragments is smaller than in other methods. Evaluation is done via physically simulated 3D Webots experiments and real-robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is fewer than necessary for most Reinforcement Learning (RL) based methods. The running time is proved finite and scales well with the environment, and the resulting number of path-fragments matches the environment well.

  6. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can first build an initial path in collective-variable space by interpolation and then update the whole path repeatedly during the optimization. However, such an interpolation method can be risky in high-dimensional spaces for large molecules: steric clashes between neighboring atoms along the path can cause extremely high energy barriers and thus make the optimization fail. Moreover, performing simulations for all the snapshots on the path is time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states during growth and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
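
    A minimal sketch of the growing step described above: each new point mixes the downhill direction (negative gradient at the path's end) with the unit vector toward the product. The 50/50 mixing weight, step size and toy surface are assumptions, not the paper's settings.

```python
import numpy as np

def grow_path(grad_fn, reactant, product, step=0.1, max_steps=500, tol=0.15):
    """Grow a path from reactant toward product on a free-energy surface."""
    product = np.asarray(product, dtype=float)
    path = [np.asarray(reactant, dtype=float)]
    while len(path) < max_steps:
        end = path[-1]
        to_product = product - end
        if np.linalg.norm(to_product) < tol:
            path.append(product)
            break
        g = grad_fn(end)
        direction = (-g / (np.linalg.norm(g) + 1e-12)
                     + to_product / np.linalg.norm(to_product))
        direction /= np.linalg.norm(direction)
        path.append(end + step * direction)   # extend the path by one bead
    return np.array(path)

# Toy 2D double-well surface (x^2 - 1)^2 + y^2: two minima at (+-1, 0)
grad = lambda p: np.array([4 * p[0] * (p[0]**2 - 1), 2 * p[1]])
path = grow_path(grad, reactant=[-1.0, 0.0], product=[1.0, 0.0])
```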

  7. Epidemic extinction paths in complex networks

    NASA Astrophysics Data System (ADS)

    Hindes, Jason; Schwartz, Ira B.

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.
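
    For contrast with the analytical large-fluctuation treatment above, a brute-force Gillespie simulation of SIS extinction on a network; the paper's WKB-style predictions are not reproduced here, and all rates are illustrative.

```python
import random

def sis_extinction_time(neighbors, beta, mu, seed_fraction=0.5, t_max=1e6):
    """Simulate SIS dynamics (infection rate beta per S-I edge, recovery
    rate mu per infected node) until the disease-free state is reached.

    neighbors: adjacency list {node: [nodes]}. Returns the extinction time.
    """
    nodes = list(neighbors)
    infected = set(random.sample(nodes, max(1, int(seed_fraction * len(nodes)))))
    t = 0.0
    while infected and t < t_max:
        si_edges = [(i, j) for i in infected for j in neighbors[i] if j not in infected]
        rate = mu * len(infected) + beta * len(si_edges)
        t += random.expovariate(rate)                 # time to the next event
        if random.random() < mu * len(infected) / rate:
            infected.remove(random.choice(list(infected)))  # recovery
        else:
            infected.add(random.choice(si_edges)[1])        # infection
    return t

ring = {i: [(i - 1) % 20, (i + 1) % 20] for i in range(20)}
print(sis_extinction_time(ring, beta=0.4, mu=1.0))
```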

  8. Epidemic extinction paths in complex networks.

    PubMed

    Hindes, Jason; Schwartz, Ira B

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  9. GPU-Based Interactive Exploration and Online Probability Maps Calculation for Visualizing Assimilated Ocean Ensembles Data

    NASA Astrophysics Data System (ADS)

    Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.

    2016-02-01

    Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest, but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the possibility to track individual particles, but can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
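
    A minimal sketch of the per-cycle binning idea: particles landing in the same bin are merged into one weighted particle, keeping the cost linear in the number of cycles instead of exponential. The bin and advection representations are simplified assumptions (1D integer bins, one transition function per ensemble member).

```python
from collections import defaultdict

def propagate_probability_map(seeds, member_maps):
    """seeds: {bin: probability}. member_maps: per assimilation cycle, a list
    of per-member transition functions bin -> bin."""
    prob = dict(seeds)
    for cycle in member_maps:                  # one assimilation cycle
        w = 1.0 / len(cycle)                   # equal member weights (assumption)
        nxt = defaultdict(float)
        for b, p in prob.items():
            for advect in cycle:               # each ensemble member's field
                nxt[advect(b)] += p * w        # merge particles per bin
        prob = dict(nxt)
    return prob

# Toy example: 1D bins, two members drifting +1 or +2 bins per cycle
cycles = [[lambda b: b + 1, lambda b: b + 2]] * 3
print(propagate_probability_map({0: 1.0}, cycles))
```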

  10. Graph edit distance from spectral seriation.

    PubMed

    Robles-Kelly, Antonio; Hancock, Edwin R

    2005-03-01

    This paper is concerned with computing graph edit distance. One of the criticisms that can be leveled at existing methods for computing graph edit distance is that they lack some of the formality and rigor of the computation of string edit distance. Hence, our aim is to convert graphs to string sequences so that string matching techniques can be used. To do this, we use a graph spectral seriation method to convert the adjacency matrix into a string or sequence order. We show how the serial ordering can be established using the leading eigenvector of the graph adjacency matrix. We pose the problem of graph-matching as a maximum a posteriori probability (MAP) alignment of the seriation sequences for pairs of graphs. This treatment leads to an expression in which the edit cost is the negative logarithm of the a posteriori sequence alignment probability. We compute the edit distance by finding the sequence of string edit operations which minimizes the cost of the path traversing the edit lattice. The edit costs are determined by the components of the leading eigenvectors of the adjacency matrix and by the edge densities of the graphs being matched. We demonstrate the utility of the edit distance on a number of graph clustering problems.
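
    A simplified sketch of the pipeline: order nodes by the leading eigenvector of the adjacency matrix, emit a comparable symbol per node, and run a string edit-distance DP. The paper derives its edit costs from eigenvector components and edge densities; uniform costs and degree symbols here are stand-in assumptions.

```python
import numpy as np

def seriation_string(adj):
    """Serial node order from the leading eigenvector of the adjacency matrix,
    then one comparable symbol per node (its degree)."""
    w, v = np.linalg.eigh(adj)
    order = np.argsort(-np.abs(v[:, np.argmax(w)]))
    return [int(adj[i].sum()) for i in order]

def edit_distance(s, t, sub=1.0, indel=1.0):
    """Plain Levenshtein DP over the two seriation sequences."""
    D = np.zeros((len(s) + 1, len(t) + 1))
    D[:, 0] = indel * np.arange(len(s) + 1)
    D[0, :] = indel * np.arange(len(t) + 1)
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else sub
            D[i, j] = min(D[i - 1, j] + indel, D[i, j - 1] + indel,
                          D[i - 1, j - 1] + cost)
    return D[len(s), len(t)]

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # star
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # chain
print(edit_distance(seriation_string(A), seriation_string(B)))
```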

  11. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    Report describes some path-following techniques for solution of nonlinear equations and compares with other methods. Use of multipurpose techniques applicable at more than one stage of path-following computation results in system relatively simple to understand, program, and use. Comparison of techniques with method of parametric differentiation (MPD) reveals definite advantages for path-following methods. Emphasis in investigation on multiuse techniques being applied at more than one stage of path-following computation. Incorporation of multipurpose techniques results in concise computer code relatively simple to use.

  12. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a quite complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on the best-known prediction methods (such as indicator Kriging and CoKriging) are implemented in the spMC package. Other more advanced methods are also available for simulation, e.g., path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.

  13. Population trends, survival, and sampling methodologies for a population of Rana draytonii

    USGS Publications Warehouse

    Fellers, Gary M.; Kleeman, Patrick M.; Miller, David A.W.; Halstead, Brian J.

    2017-01-01

    Estimating population trends provides valuable information for resource managers, but monitoring programs face trade-offs between the quality and quantity of information gained and the number of sites surveyed. We compared the effectiveness of monitoring techniques for estimating population trends of Rana draytonii (California Red-legged Frog) at Point Reyes National Seashore, California, USA, over a 13-yr period. Our primary goals were to: 1) estimate trends for a focal pond at Point Reyes National Seashore, and 2) evaluate whether egg mass counts could reliably estimate an index of abundance relative to more-intensive capture–mark–recapture methods. Capture–mark–recapture (CMR) surveys of males indicated a stable population from 2005 to 2009, despite low annual apparent survival (26.3%). Egg mass counts from 2000 to 2012 indicated that despite some large fluctuations, the breeding female population was generally stable or increasing, with annual abundance varying between 26 and 130 individuals. Minor modifications to egg mass counts, such as marking egg masses, can allow estimation of egg mass detection probabilities necessary to convert counts to abundance estimates, even when closure of egg mass abundance cannot be assumed within a breeding season. High egg mass detection probabilities (mean per-survey detection probability = 0.98 [0.89–0.99]) indicate that egg mass surveys can be an efficient and reliable method for monitoring population trends of federally threatened R. draytonii. Combining egg mass surveys to estimate trends at many sites with CMR methods to evaluate factors affecting adult survival at focal populations is likely a profitable path forward to enhance understanding and conservation of R. draytonii.

  14. Multi-Scale/Multi-Functional Probabilistic Composite Fatigue

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.

    2008-01-01

    A multi-level (multi-scale/multi-functional) evaluation is demonstrated by applying it to three different sample problems. These problems include the probabilistic evaluation of a space shuttle main engine blade, an engine rotor and an aircraft wing. The results demonstrate that the blade will fail at the highest probability path, the engine two-stage rotor will fail by fracture at the rim and the aircraft wing will fail at 10^9 fatigue cycles with a probability of 0.9967.

  15. Cargo Throughput and Survivability Trade-Offs in Force Sustainment Operations

    DTIC Science & Technology

    2008-06-01

    more correlation with direct human activity. Mines are able to simply ‘sit and wait,’ thus allowing for easier mathematical and statistical ...1.2) Since the ships will likely travel in groups along the same programmed GPS track, modeling several transitors to the identical path is assumed...setting of 1/2 was used for the actuation probability maximum. The ‘threat profile’ will give the probability that the nth transitor will hit a mine

  16. Robotic path-finding in inverse treatment planning for stereotactic radiosurgery with continuous dose delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandewouw, Marlee M., E-mail: marleev@mie.utoronto

    Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors' method is comprised of two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically, simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is novelly extended to explore a 3D, discrete environment, which is the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.

  17. Turbulence effects in a horizontal propagation path close to ground: implications for optics detection

    NASA Astrophysics Data System (ADS)

    Sjöqvist, Lars; Allard, Lars; Gustafsson, Ove; Henriksson, Markus; Pettersson, Magnus

    2011-11-01

    Atmospheric turbulence effects close to the ground may severely affect the performance of laser-based systems. The variations in the refractive index along the propagation path cause effects such as beam wander, intensity fluctuations (scintillations) and beam broadening. Typical geometries of interest for optics detection include nearly horizontal propagation paths close to the ground with up to a kilometre distance to the target. The scintillations and beam wander affect the performance in terms of detection probability and false alarm rate. Our interest is to study the influence of turbulence in optics detection applications. In a field trial, atmospheric turbulence effects along a 1 km horizontal propagation path were studied using a diode laser with a rectangular beam profile operating at a wavelength of 0.8 micrometers. Single-path beam characteristics were registered and analysed using photodetectors arranged in the horizontal and vertical directions. The turbulence strength along the path was determined using a scintillometer and single-point ultrasonic anemometers. Strong scintillation effects were observed as a function of the turbulence strength, and amplitude characteristics were fitted to model distributions. In addition to the single-path analysis, double-path measurements were carried out on different targets. Experimental results are compared with existing theoretical models of laser beam propagation through turbulence. The results show that the influence of scintillations needs to be considered when predicting performance in optics detection applications.

  18. Medical writing on an accelerated path in India

    PubMed Central

    Shirke, Sarika

    2015-01-01

    The medical writing industry is on an upward growth path in India. This is probably driven by an increasing urgency to have high-quality documents authored to support timely drug approvals, complemented by the realization that the required competencies are available in emerging geographies such as India. This article reviews the business landscape and the opportunities and challenges associated with outsourcing medical writing work to India. It also analyzes the core competencies that a medical writer should possess and lists the various associations supporting learning in this domain. PMID:26229746

  19. Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.

    PubMed

    Suderman, Bethany L; Vasavada, Anita N

    2017-08-01

    Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). Moment arm estimates were also found to be significantly different among moment arm calculation methods for 11 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). In particular, using straight lines to model muscle paths can lead to overestimating neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.

  20. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines

    PubMed Central

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, delaying the starting time of the next operation and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations of mobile machines, e.g., drilling, bolting and mucking. The approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If a delay of a critical operation (one on the critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign new jobs to the mucking machines to keep this amount at a maximum level, using a new mucking algorithm under external constraints. PMID:26062092

  1. An Approach to Realizing Process Control for Underground Mining Operations of Mobile Machines.

    PubMed

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    The excavation and production in underground mines are complicated processes consisting of many different operations. The process of underground mining is considerably constrained by the geometry and geology of the mine. The various mining operations are normally performed in series at each working face. The delay of a single operation will lead to a domino effect, delaying the starting time of the next operation and the completion time of the entire process. This paper presents a new approach to process control for underground mining operations of mobile machines, e.g., drilling, bolting and mucking. The approach can estimate the working time and its probability for each operation more efficiently and objectively by improving the existing PERT (Program Evaluation and Review Technique) and CPM (Critical Path Method). If a delay of a critical operation (one on the critical path) inevitably affects the productivity of mined ore, the approach can rapidly assign new jobs to the mucking machines to keep this amount at a maximum level, using a new mucking algorithm under external constraints.

  2. The dissociative chemisorption of methane on Ni(100) and Ni(111): Classical and quantum studies based on the reaction path Hamiltonian

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mastromatteo, Michael; Jackson, Bret, E-mail: jackson@chem.umass.edu

    Electronic structure methods based on density functional theory are used to construct a reaction path Hamiltonian for CH{sub 4} dissociation on the Ni(100) and Ni(111) surfaces. Both quantum and quasi-classical trajectory approaches are used to compute dissociative sticking probabilities, including all molecular degrees of freedom and the effects of lattice motion. Both approaches show a large enhancement in sticking when the incident molecule is vibrationally excited, and both can reproduce the mode specificity observed in experiments. However, the quasi-classical calculations significantly overestimate the ground state dissociative sticking at all energies, and the magnitude of the enhancement in sticking with vibrational excitation is much smaller than that computed using the quantum approach or observed in the experiments. The origin of this behavior is an unphysical flow of zero point energy from the nine normal vibrational modes into the reaction coordinate, giving large values for reaction at energies below the activation energy. Perturbative assumptions made in the quantum studies are shown to be accurate at all energies studied.

  3. An improved least cost routing approach for WDM optical network without wavelength converters

    NASA Astrophysics Data System (ADS)

    Bonani, Luiz H.; Forghani-elahabad, Majid

    2016-12-01

    The routing and wavelength assignment (RWA) problem has been an attractive problem in optical networks, and consequently several algorithms have been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last entails high computational complexity and requires extensive support from control and management protocols. The second offers a trade-off between performance and complexity, and hence we improve upon it in our work. Considering the RWA problem in a wavelength-routed optical network with no wavelength converters, an improved technique is proposed for the routing subproblem in order to decrease the BP of the network. Following the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others for different numbers of available wavelengths.
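
    A sketch of the allocation step under the fixed-alternate scheme with most-used wavelength assignment, assuming the k shortest paths have already been computed and sorted by the paper's link/path cost (the topology, W, and usage counters here are illustrative):

    # Per-link wavelength occupancy for a 3-node toy network, W wavelengths per link.
    W = 4
    links = {("A", "B"): [False] * W, ("B", "C"): [False] * W, ("A", "C"): [False] * W}
    usage = [0] * W   # network-wide use count per wavelength, for "most-used" ordering

    def try_connect(candidate_paths):
        """candidate_paths: pre-sorted alternate paths, each a list of links."""
        for path in candidate_paths:
            # Wavelength continuity: one wavelength must be free on every link.
            free = [w for w in range(W) if all(not links[l][w] for l in path)]
            if free:
                w = max(free, key=lambda i: usage[i])   # most-used wavelength first
                for l in path:
                    links[l][w] = True
                usage[w] += 1
                return path, w
        return None   # all alternates blocked: the request counts toward the BP

    print(try_connect([[("A", "C")], [("A", "B"), ("B", "C")]]))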

  4. [CALCULATION OF THE PROBABILITY OF METALS INPUT INTO AN ORGANISM WITH DRINKING POTABLE WATERS].

    PubMed

    Tunakova, Yu A; Fayzullin, R I; Valiev, V S

    2015-01-01

    The work was performed in the framework of the State program for improving the competitiveness of Kazan (Volga) Federal University among the world's leading research and education centers, and of subsidies allocated to Kazan Federal University for public tasks in the field of scientific research. The current methodological recommendations "Guide for assessing the risk to public health under the influence of chemicals that pollute the environment," P 2.1.10.1920-04, regulate the determination of quantitative and/or qualitative characteristics of the harmful effects to human health from exposure to environmental factors. We propose to complement the methodological approaches presented in P 2.1.10.1920-04 with an estimate of the probability of pollutant intake into the body with drinking water, which is greater the more the actual concentrations of the substances exceed background concentrations. The paper proposes a method for calculating the probability that actual concentrations of metal cations exceed background levels in samples of drinking water consumed by the population, with samples taken at the end points of consumption in houses and apartments so as to account for secondary pollution in water pipelines and distribution paths. The research was performed on the example of Kazan, divided into zones. The probabilities were calculated using Bayes' theorem.
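
    A minimal sketch of the kind of Bayes update involved, with hypothetical numbers (the paper's actual priors and likelihoods would come from the zoned water-sampling data):

    # P(zone affected by secondary pollution | a tap sample exceeds background).
    # All probabilities below are illustrative placeholders.
    prior = 0.20                    # P(zone affected)
    p_exceed_if_affected = 0.85     # likelihood of an exceedance if affected
    p_exceed_if_clean = 0.10        # false-exceedance rate otherwise

    evidence = prior * p_exceed_if_affected + (1 - prior) * p_exceed_if_clean
    posterior = prior * p_exceed_if_affected / evidence
    print(f"P(affected | exceedance) = {posterior:.2f}")   # = 0.68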

  5. Intensity Modulated Radiation Therapy Dose Painting for Localized Prostate Cancer Using {sup 11}C-choline Positron Emission Tomography Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Joe H.; University of Melbourne, Victoria; Lim Joon, Daryl

    Purpose: To demonstrate the technical feasibility of intensity modulated radiation therapy (IMRT) dose painting using {sup 11}C-choline positron emission tomography (PET) scans in patients with localized prostate cancer. Methods and Materials: This was an RT planning study of 8 patients with prostate cancer who had {sup 11}C-choline PET scans prior to radical prostatectomy. Two contours were semiautomatically generated on the basis of the PET scans for each patient: 60% and 70% of the maximum standardized uptake values (SUV{sub 60%} and SUV{sub 70%}). Three IMRT plans were generated for each patient: PLAN{sub 78}, which consisted of whole-prostate radiation therapy to 78 Gy; PLAN{sub 78-90}, which consisted of whole-prostate RT to 78 Gy, a boost to the SUV{sub 60%} to 84 Gy, and a further boost to the SUV{sub 70%} to 90 Gy; and PLAN{sub 72-90}, which consisted of whole-prostate RT to 72 Gy, a boost to the SUV{sub 60%} to 84 Gy, and a further boost to the SUV{sub 70%} to 90 Gy. The feasibility of these plans was judged by their ability to reach prescription doses while adhering to published dose constraints. Tumor control probabilities based on PET scan-defined volumes (TCP{sub PET}) and on prostatectomy-defined volumes (TCP{sub path}), and rectal normal tissue complication probabilities (NTCP), were compared between the plans. Results: All plans for all patients reached prescription doses while adhering to dose constraints. TCP{sub PET} values for PLAN{sub 78}, PLAN{sub 78-90}, and PLAN{sub 72-90} were 65%, 97%, and 96%, respectively. TCP{sub path} values were 71%, 97%, and 89%, respectively. Both PLAN{sub 78-90} and PLAN{sub 72-90} had significantly higher TCP{sub PET} (P=.002 and .001) and TCP{sub path} (P<.001 and .014) values than PLAN{sub 78}. PLAN{sub 78-90} and PLAN{sub 72-90} were not significantly different in terms of TCP{sub PET} or TCP{sub path}. There were no significant differences in rectal NTCPs between the 3 plans. Conclusions: IMRT dose painting for localized prostate cancer using {sup 11}C-choline PET scans is technically feasible. Dose painting results in higher TCPs without higher NTCPs.

  6. A Cooperative Search and Coverage Algorithm with Controllable Revisit and Connectivity Maintenance for Multiple Unmanned Aerial Vehicles.

    PubMed

    Liu, Zhong; Gao, Xiaoguang; Fu, Xiaowei

    2018-05-08

    In this paper, we study a cooperative search and coverage algorithm for a given bounded rectangular region containing several unknown stationary targets, searched by a team of unmanned aerial vehicles (UAVs) with non-ideal sensors and limited communication ranges. Our goal is to minimize the search time while gathering more information about the environment and finding more targets. For this purpose, a novel cooperative search and coverage algorithm with a controllable revisit mechanism is presented. Firstly, as the representation of the environment, cognitive maps comprising the target probability map (TPM), the uncertain map (UM), and the digital pheromone map (DPM) are constructed. We also design a distributed update and fusion scheme for the cognitive maps. This scheme guarantees that each cognitive map converges to the same one, reflecting each target's true existence or absence in each cell of the search region. Secondly, we develop a controllable revisit mechanism based on the DPM. This mechanism can concentrate the UAVs to revisit sub-areas that have a large target probability or high uncertainty. Thirdly, in the frame of distributed receding horizon optimization, a path planning algorithm for multi-UAV cooperative search and coverage is designed. In the path planning algorithm, the movement of the UAVs is restricted by potential fields to meet the requirements of avoiding collision and maintaining connectivity. Moreover, using the minimum spanning tree (MST) topology optimization strategy, we obtain a tradeoff between search coverage enhancement and connectivity maintenance. The feasibility of the proposed algorithm is demonstrated by comparative simulations analyzing the effects of the controllable revisit mechanism and the connectivity maintenance scheme. The Monte Carlo method is employed to evaluate the influence of the number of UAVs, the sensing radius, the detection and false alarm probabilities, and the communication range on the proposed algorithm.
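
    The per-cell TPM update for a visited cell follows the standard Bayes rule for a sensor with detection probability pd and false-alarm probability pf; a minimal sketch (grid size and sensor parameters are illustrative, not the paper's):

    import numpy as np

    def update_tpm(p, detected, pd=0.9, pf=0.1):
        # Bayes update of the per-cell target probability given one observation.
        if detected:
            return pd * p / (pd * p + pf * (1 - p))
        return (1 - pd) * p / ((1 - pd) * p + (1 - pf) * (1 - p))

    tpm = np.full((4, 4), 0.5)                          # uninformative prior
    tpm[2, 3] = update_tpm(tpm[2, 3], detected=True)    # -> 0.9
    tpm[0, 0] = update_tpm(tpm[0, 0], detected=False)   # -> 0.1
    print(tpm[2, 3], tpm[0, 0])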

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Liborio I., E-mail: liborio78@gmail.com

    A new Markov Chain Monte Carlo method for simulating the dynamics of particle systems characterized by hard-core interactions is introduced. In contrast to traditional Kinetic Monte Carlo approaches, where the state of the system is associated with minima in the energy landscape, in the proposed method, the state of the system is associated with the set of paths traveled by the atoms, and the transition probabilities for an atom to be displaced are proportional to the corresponding velocities. In this way, the number of possible state-to-state transitions is reduced to a discrete set, and a direct link between the Monte Carlo time step and true physical time is naturally established. The resulting rejection-free algorithm is validated against event-driven molecular dynamics: the equilibrium and non-equilibrium dynamics of hard disks converge to the exact results with decreasing displacement size.

  8. Path Finding on High-Dimensional Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Díaz Leines, Grisell; Ensing, Bernd

    2012-07-01

    We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.

  9. Psychosocial Pathways to Sexually Transmitted Infection (STI) Risk Among Youth Transitioning Out of Foster Care: Evidence from a Longitudinal Cohort Study

    PubMed Central

    McCarty, Cari; Simoni, Jane; Dworsky, Amy; Courtney, Mark E.

    2013-01-01

    Purpose To test the fit of a theoretically driven conceptual model of pathways to STI risk among foster youth transitioning to adulthood. The model included: 1) historical abuse and foster care experiences, 2) mental health and attachment style in late adolescence, and 3) STI risk in young adulthood. Methods We used path analysis to analyze data from a longitudinal study of 732 youth transitioning out of foster care. Covariates included gender, race and an inverse probability weight. We also performed moderation analyses comparing models constrained and unconstrained by gender. Results Thirty percent reported they or a partner had been diagnosed with an STI. Probability of other measured STI risk behaviors ranged from 9% (having sex for money) to 79% (inconsistent condom use). Overall model fit was good (Standardized Root Mean Squared Residual of 0.026). Increased risk of oppositional/delinquent behaviors mediated an association between abuse history and STI risk, via increased inconsistent condom use. There was also a borderline association with having greater than 5 partners. Having a very close relationship with a caregiver and remaining in foster care beyond age 18 decreased STI risk. Moderation analysis revealed better model fit when coefficients were allowed to vary by gender versus a constrained model, but few significant differences in individual path coefficients were found between male and female-only models. Conclusions Interventions/policies that: 1) address externalizing trauma sequelae, 2) promote close, stable substitute caregiver relationships, and 3) extend care to age 21 years have the potential to decrease STI risk in this population. PMID:23859955

  10. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
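
    A rough sketch of the exact-penalty idea on a tiny nonnegative least squares problem: minimize 0.5*||Ax - b||^2 + rho * sum(max(0, -x_i)) while stepping rho upward from the unconstrained solution. A grid of rho values stands in for genuine ODE-based path following, and the data are random placeholders:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(8, 3)), rng.normal(size=8)

    def penalized(x, rho):
        # Exact (absolute-value) penalty for the constraints x >= 0.
        return 0.5 * np.sum((A @ x - b) ** 2) + rho * np.sum(np.maximum(0.0, -x))

    x = np.linalg.lstsq(A, b, rcond=None)[0]   # the path starts at the unconstrained solution
    for rho in [0.1, 1.0, 10.0, 100.0]:
        # Nelder-Mead copes with the kinks of the nonsmooth penalty.
        x = minimize(penalized, x, args=(rho,), method="Nelder-Mead").x
        print(f"rho={rho:>5}: x={np.round(x, 3)}")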

  12. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for shaped Cassegrain antennas is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted by Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation is implemented, and the extended application of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The approach is general and can serve as a reference for optical path difference calculations for other radio telescopes with shaped surfaces.

  13. Dynamic path planning for autonomous driving on various roads with avoidance of static and moving obstacles

    NASA Astrophysics Data System (ADS)

    Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo

    2018-02-01

    This paper presents a real-time dynamic path planning method for autonomous driving that avoids both static and moving obstacles. The proposed path planning method determines not only an optimal path, but also the appropriate acceleration and speed for a vehicle. In this method, we first construct a center line from a set of predefined waypoints, which are usually obtained from a lane-level map. A series of path candidates are generated by the arc length and offset to the center line in the s - ρ coordinate system. Then, all of these candidates are converted into Cartesian coordinates. The optimal path is selected considering the total cost of static safety, comfortability, and dynamic safety; meanwhile, the appropriate acceleration and speed for the optimal path are also identified. Various types of roads, including single-lane roads and multi-lane roads with static and moving obstacles, are designed to test the proposed method. The simulation results demonstrate the effectiveness of the proposed method, and indicate its wide practical application to autonomous driving.
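
    A toy version of the candidate-generation and selection step: lateral-offset candidates are laid over the center line in the s-rho frame, colliding candidates are rejected, and the cheapest survivor wins. The offset profile, cost weights, and obstacle below are simplified placeholders:

    import numpy as np

    s = np.linspace(0.0, 50.0, 101)            # arc length along the center line [m]
    offsets = np.arange(-2.0, 2.01, 0.5)       # candidate lateral offsets [m]
    obstacle = (25.0, 0.0, 1.2)                # (s, rho, radius) of a static obstacle

    def cost(rho_peak):
        rho = rho_peak * np.sin(np.pi * s / s[-1])     # swerve-and-return profile
        if np.hypot(s - obstacle[0], rho - obstacle[1]).min() < obstacle[2]:
            return np.inf                              # collision: reject candidate
        return 0.5 * rho_peak ** 2 + 0.1 * abs(rho_peak)   # comfort + deviation

    best = min(offsets, key=cost)
    print(f"selected lateral offset: {best} m")        # cheapest clear candidate: -1.5 m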

  14. A path planning method used in fluid jet polishing eliminating lightweight mirror imprinting effect

    NASA Astrophysics Data System (ADS)

    Li, Wenzong; Fan, Bin; Shi, Chunyan; Wang, Jia; Zhuo, Bin

    2014-08-01

    With the development of space technology, optical system designs tend toward large-aperture lightweight mirrors with high dimension-to-thickness ratios. However, when the required lightweight mirror surface accuracy (PV) is below λ/10, the surface shows an obvious wavy imprinting effect. The imprinting effect introduced by tool pressure has become a technological barrier in high-precision lightweight mirror manufacturing. Fluid jet polishing can eliminate outside pressure. Commonly used machining tracks are raster (grating-type) paths, spiral paths, and pseudo-random paths. At the edges of the imprinting error, the speed between adjacent path points changes too fast for the machine to respond quickly, which introduces new path errors and increases polishing time through superfluous path segments. This paper presents a new path planning method to eliminate the imprinting effect. Simulation results show that the improved raster path eliminates the imprinting effect better than the general path.

  15. Multiple Scattering in Planetary Regoliths Using Incoherent Interactions

    NASA Astrophysics Data System (ADS)

    Muinonen, K.; Markkanen, J.; Vaisanen, T.; Penttilä, A.

    2017-12-01

    We consider scattering of light by a planetary regolith using novel numerical methods for discrete random media of particles. Understanding the scattering process is of key importance for spectroscopic, photometric, and polarimetric modeling of airless planetary objects, including radar studies. In our modeling, the size of the spherical random medium can range from microscopic to macroscopic sizes, whereas the particles are assumed to be of the order of the wavelength in size. We extend the radiative transfer and coherent backscattering method (RT-CB) to the case of dense packing of particles by adopting the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path. Furthermore, we replace the far-field interactions of the RT-CB method with rigorous interactions facilitated by the Superposition T-matrix method (STMM). This gives rise to a new RT-RT method, radiative transfer with reciprocal interactions. For microscopic random media, we then compare the new results to asymptotically exact results computed using the STMM, succeeding in the numerical validation of the new methods. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  16. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining an improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
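
    A minimal sketch of an artificial potential field step with a chaotic perturbation from a logistic map helping the robot escape local minima; the gains, map choice, and scenario are illustrative, not the paper's exact formulation:

    import numpy as np

    goal = np.array([10.0, 10.0])
    obstacles = [np.array([5.0, 5.2])]
    k_att, k_rep, d0 = 1.0, 50.0, 2.0      # attraction/repulsion gains, influence radius
    z = 0.4                                # logistic-map state in (0, 1)

    def force(p):
        f = k_att * (goal - p)                               # attractive term
        for o in obstacles:
            d = np.linalg.norm(p - o)
            if d < d0:                                       # repulsion inside d0 only
                f += k_rep * (1 / d - 1 / d0) / d ** 3 * (p - o)
        return f

    p = np.array([0.0, 0.0])
    for _ in range(400):
        z = 4.0 * z * (1.0 - z)                              # chaotic sequence
        jitter = 0.05 * np.array([z - 0.5, 0.5 - z])         # chaotic perturbation
        step = force(p)
        p = p + 0.05 * step / np.linalg.norm(step) + jitter
    print(np.round(p, 2))                                    # ends near the goal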

  17. Adaptive Bio-Inspired Wireless Network Routing for Planetary Surface Exploration

    NASA Technical Reports Server (NTRS)

    Alena, Richard I.; Lee, Charles

    2004-01-01

    Wireless mobile networks suffer connectivity loss when used in terrain with hills and valleys, where line of sight is interrupted or range is exceeded. To resolve this problem and achieve acceptable network performance, we have designed an adaptive, configurable, hybrid system to automatically route network packets along the best path between multiple geographically dispersed modules. This is very useful in planetary surface exploration, especially for ad-hoc mobile networks, where computational devices take an active part in creating the network infrastructure and can route data dynamically, even storing data for later transmission between networks. Drawing inspiration from biological systems, this research proposes to use ant trail algorithms with multi-layered information maps (topographic maps, RF coverage maps) to determine the best route through an ad-hoc network in real time. Determining the best route is complex, and requires research into the appropriate metrics, the best method to identify the best path, traffic capacity, network performance, reliability, processing capabilities, and cost. Real ants are capable of finding the shortest path from their nest to a food source without visual sensing through the use of pheromones, and they are able to adapt to changes in the environment using subtle clues. To use ant trail algorithms, we need to define the probability function. The artificial ant is, in this case, a software agent that moves from node to node on a network graph. The fitness function that evaluates the better path includes the length of the network edge, the coverage index, the topology graph index, and the pheromone trail left behind by other ant agents. Each agent modifies the environment in two different ways: 1) local trail updating: as the ant moves between nodes, it updates the amount of pheromone on the edge; and 2) global trail updating: when all ants have completed a tour, the ant that found the shortest route updates the edges in its path.
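
    A sketch of the transition rule such ant algorithms typically use: the probability of hopping to a neighbor mixes pheromone tau with a heuristic eta (here 1/edge-cost; the coverage and topography indices above would enter the same way). Values and parameters are illustrative:

    import random

    tau = {"B": 1.0, "C": 1.0}        # pheromone on edges from the current node A
    cost = {"B": 2.0, "C": 4.0}       # edge metric (length, RF coverage index, ...)
    alpha, beta, rho = 1.0, 2.0, 0.5  # pheromone weight, heuristic weight, evaporation

    def pick_next():
        # P(j) proportional to tau_j^alpha * (1/cost_j)^beta (roulette-wheel choice).
        w = {j: tau[j] ** alpha * (1.0 / cost[j]) ** beta for j in tau}
        r, acc = random.random() * sum(w.values()), 0.0
        for j, wj in w.items():
            acc += wj
            if r <= acc:
                return j

    nxt = pick_next()
    tau[nxt] = (1 - rho) * tau[nxt] + 1.0 / cost[nxt]   # local trail update
    print(nxt, tau)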

  18. Tree-average distances on certain phylogenetic networks have their weights uniquely determined.

    PubMed

    Willson, Stephen J

    2012-01-01

    A phylogenetic network N has vertices corresponding to species and arcs corresponding to direct genetic inheritance from the species at the tail to the species at the head. Measurements of DNA are often made on species in the leaf set, and one seeks to infer properties of the network, possibly including the graph itself. In the case of phylogenetic trees, distances between extant species are frequently used to infer the phylogenetic trees by methods such as neighbor-joining. This paper proposes a tree-average distance for networks more general than trees. The notion requires a weight on each arc measuring the genetic change along the arc. For each displayed tree the distance between two leaves is the sum of the weights along the path joining them. At a hybrid vertex, each character is inherited from one of its parents. We will assume that for each hybrid there is a probability that the inheritance of a character is from a specified parent. Assume that the inheritance events at different hybrids are independent. Then for each displayed tree there will be a probability that the inheritance of a given character follows the tree; this probability may be interpreted as the probability of the tree. The tree-average distance between the leaves is defined to be the expected value of their distance in the displayed trees. For a class of rooted networks that includes rooted trees, it is shown that the weights and the probabilities at each hybrid vertex can be calculated given the network and the tree-average distances between the leaves. Hence these weights and probabilities are uniquely determined. The hypotheses on the networks include that hybrid vertices have indegree exactly 2 and that vertices that are not leaves have a tree-child.
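
    For a network with a single hybrid vertex h and inheritance probability m from one parent, the tree-average distance defined above is just the probability-weighted mixture of the two displayed-tree distances; a tiny worked example with illustrative weights:

    # Two displayed trees arise from resolving hybrid h toward parent u or parent v.
    m = 0.7            # P(a character at h is inherited from parent u)
    d_via_u = 5.0      # leaf-to-leaf path weight in the tree displaying u -> h
    d_via_v = 8.0      # leaf-to-leaf path weight in the tree displaying v -> h

    d_avg = m * d_via_u + (1 - m) * d_via_v
    print(f"tree-average distance: {d_avg}")   # 5.9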

  19. Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.

    1985-01-01

    The bearingless rotorcraft offers reduced weight, less complexity, and superior flying qualities. Almost all current industrial structural dynamics programs for conventional rotors, which consist of single load path rotor blades, employ the transfer matrix method to determine natural vibration characteristics, because this method is ideally suited for one-dimensional chain-like structures. This method is extended to multiple load path rotor blades without resorting to an equivalent single load path approximation. Unlike for conventional blades, it is necessary to introduce the axial degree of freedom into the solution process to account for the differential axial displacements in the different load paths. With the present extension, current rotor dynamics programs can be modified with relative ease to account for multiple load paths without resorting to equivalent single load path modeling. The results obtained by the transfer matrix method are validated by comparison with finite element solutions. A differential stiffness matrix due to blade rotation is derived to facilitate the finite element solutions.

  20. Concentrations and speciation of arsenic along a groundwater flow-path in the Upper Floridan aquifer, Florida, USA

    NASA Astrophysics Data System (ADS)

    Haque, S. E.; Johannesson, K. H.

    2006-05-01

    Arsenic (As) concentrations and speciation were determined in groundwaters along a flow-path in the Upper Floridan aquifer (UFA) to investigate the biogeochemical “evolution” of As in this relatively pristine aquifer. Dissolved inorganic As species were separated in the field using anion-exchange chromatography and subsequently analyzed by inductively coupled plasma mass spectrometry. Total As concentrations are higher in the recharge area groundwaters compared to down-gradient portions of the UFA. Redox conditions vary from relatively oxic to anoxic along the flow-path. Mobilization of As species in UFA groundwaters is influenced by ferric iron reduction and subsequent dissolution, sulfate reduction, and probable pyrite precipitation, which are inferred from the data to occur along distinct regions of the flow-path. In general, the distribution of As species is consistent with equilibrium thermodynamics, such that arsenate dominates in more oxidizing waters near the recharge area, and arsenite predominates in the progressively reducing groundwaters beyond the recharge area.

  1. Implementation Of Fuzzy Approach To Improve Time Estimation [Case Study Of A Thermal Power Plant Is Considered]

    NASA Astrophysics Data System (ADS)

    Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.

    2010-10-01

    Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the performance of systems. Here a case study of a thermal power plant is considered. The existing time estimates represent the time to complete tasks. Applying a fuzzy linear approach, it becomes clear that at each confidence level less time is needed to complete the tasks; as the schedule shortens, the cost falls with it. The objective of this paper is to show how a system becomes more efficient when a fuzzy linear approach is applied, optimizing the time estimates so that all tasks fit appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp), and most likely time (tm) are taken from data collected at the thermal power plant. These estimates yield the expected time (te), which represents the time to complete a particular task allowing for all contingencies. Using the project evaluation and review technique (PERT) and the critical path method (CPM), the critical path duration (CPD) of the project is calculated; it indicates a fifty percent probability that the total tasks can be completed in fifty days. Using the critical path duration and the standard deviation along the critical path, the probability of completing the entire project by a given date follows from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), we calculate a defuzzified value of the time estimates. For the fuzzy range, we consider four confidence levels: 0.4, 0.6, 0.8, and 1. Our study shows that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
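
    The PERT arithmetic the study builds on, as a quick sketch: per task te = (to + 4*tm + tp)/6 and sigma = (tp - to)/6, then a normal approximation along the critical path gives the completion probability. Task data below are illustrative, not the plant's:

    from math import sqrt, erf

    tasks = [(4, 6, 10), (8, 10, 14), (5, 7, 13)]       # (to, tm, tp) on the critical path
    te = sum((o + 4 * m + p) / 6 for o, m, p in tasks)  # critical path duration (CPD)
    var = sum(((p - o) / 6) ** 2 for o, m, p in tasks)  # summed task variances

    deadline = 26.0
    z = (deadline - te) / sqrt(var)
    prob = 0.5 * (1 + erf(z / sqrt(2)))                 # standard normal CDF Phi(z)
    print(f"CPD = {te:.1f} days, P(finish <= {deadline} days) = {prob:.2f}")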

  2. Exploring the Mechanisms of Differentiation, Dedifferentiation, Reprogramming and Transdifferentiation

    PubMed Central

    Xu, Li; Zhang, Kun; Wang, Jin

    2014-01-01

    We explored the underlying mechanisms of differentiation, dedifferentiation, reprogramming and transdifferentiation (cell type switchings) from landscape and flux perspectives. Lineage reprogramming is a new regenerative method to convert a matured cell into another cell including direct transdifferentiation without undergoing a pluripotent cell state and indirect transdifferentiation with an initial dedifferentiation-reversion (reprogramming) to a pluripotent cell state. Each cell type is quantified by a distinct valley on the potential landscape with higher probability. We investigated three driving forces for cell fate decision making: stochastic fluctuations, gene regulation and induction, which can lead to cell type switchings. We showed that under the driving forces the direct transdifferentiation process proceeds from a differentiated cell valley to another differentiated cell valley through either a distinct stable intermediate state or a certain series of unstable indeterminate states. The dedifferentiation process proceeds through a pluripotent cell state. Barrier height and the corresponding escape time from the valley on the landscape can be used to quantify the stability and efficiency of cell type switchings. We also uncovered the mechanisms of the underlying processes by quantifying the dominant biological paths of cell type switchings on the potential landscape. The dynamics of cell type switchings are determined by both landscape gradient and flux. The flux can lead to the deviations of the dominant biological paths for cell type switchings from the naively expected landscape gradient path. As a result, the corresponding dominant paths of cell type switchings are irreversible. We also classified the mechanisms of cell fate development from our landscape theory: super-critical pitchfork bifurcation, sub-critical pitchfork bifurcation, sub-critical pitchfork with two saddle-node bifurcation, and saddle-node bifurcation. Our model showed good agreements with the experiments. It provides a general framework to explore the mechanisms of differentiation, dedifferentiation, reprogramming and transdifferentiation. PMID:25133589

  3. Low Probability of Intercept Laser Range Finder

    DTIC Science & Technology

    2017-07-19

    Patent excerpt (fragmentary): the detected parameters include time of arrival, and may also include wavelength, pulse width, and pulse repetition frequency (PRF); a second photodetector (38), in conjunction with a lens (32) and telescope (36), can correct for turbulence along the free-space path.

  4. WE-H-BRA-08: A Monte Carlo Cell Nucleus Model for Assessing Cell Survival Probability Based On Particle Track Structure Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, B; Georgia Institute of Technology, Atlanta, GA; Wang, C

    Purpose: To correlate the damage produced by particles of different types and qualities to cell survival on the basis of nanodosimetric analysis and advanced DNA structures in the cell nucleus. Methods: A Monte Carlo code was developed to simulate subnuclear DNA chromatin fibers (CFs) of 30nm utilizing a mean-free-path approach common to radiation transport. The cell nucleus was modeled as a spherical region containing 6000 chromatin-dense domains (CDs) of 400nm diameter, with additional CFs modeled in a sparser interchromatin region. The Geant4-DNA code was utilized to produce a particle track database representing various particles at different energies and dose quantities. These tracks were used to stochastically position the DNA structures based on their mean free path to interaction with CFs. Excitation and ionization events intersecting CFs were analyzed using the DBSCAN clustering algorithm for assessment of the likelihood of producing DSBs. Simulated DSBs were then assessed based on their proximity to one another for a probability of inducing cell death. Results: Variations in energy deposition to chromatin fibers match expectations based on differences in particle track structure. The quality of damage to CFs based on different particle types indicates more severe damage by high-LET radiation than low-LET radiation of identical particles. In addition, the model indicates more severe damage by protons than by alpha particles of the same LET, which is consistent with differences in their track structure. Cell survival curves have been produced showing the L-Q behavior of sparsely ionizing radiation. Conclusion: Initial results indicate the feasibility of producing cell survival curves based on the Monte Carlo cell nucleus method. Accurate correlation between simulated DNA damage and cell survival on the basis of nanodosimetric analysis can provide insight into the biological responses to various radiation types. Current efforts are directed at producing cell survival curves for high-LET radiation.
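
    The clustering step described above is a standard density-based grouping of event coordinates; a minimal sketch with scikit-learn's DBSCAN (event positions, eps, and min_samples are illustrative, not the paper's calibrated values):

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(1)
    events = np.vstack([
        rng.normal([10, 10, 10], 1.5, size=(8, 3)),   # dense cluster of events [nm]
        rng.uniform(0, 100, size=(20, 3)),            # sparse background events [nm]
    ])

    labels = DBSCAN(eps=5.0, min_samples=4).fit_predict(events)
    n_candidate_dsb = len(set(labels) - {-1})         # label -1 marks noise points
    print(f"candidate DSB clusters: {n_candidate_dsb}")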

  5. General hyperconcentration of photonic polarization-time-bin hyperentanglement assisted by nitrogen-vacancy centers coupled to resonators

    NASA Astrophysics Data System (ADS)

    Du, Fang-Fang; Deng, Fu-Guo; Long, Gui-Lu

    2016-11-01

    Entanglement concentration protocol (ECP) is used to extract the maximally entangled states from less entangled pure states. Here we present a general hyperconcentration protocol for two-photon systems in partially hyperentangled Bell states that decay with the interrelation between the time-bin and the polarization degrees of freedom (DOFs), resorting to an input-output process with respect to diamond nitrogen-vacancy centers coupled to resonators. We show that the resource can be utilized sufficiently and the success probability is largely improved by iteration of the hyper-ECP process. Besides, our hyper-ECP can be directly extended to concentrate nonlocal partially hyperentangled N-photon Greenberger-Horne-Zeilinger states, and the success probability remains unchanged with the growth of the number of photons. Moreover, the time-bin entanglement is a useful DOF and it only requires one path for transmission, which means it not only economizes on a large amount of quantum resources but also relaxes from the path-length dispersion in long-distance quantum communication.

  6. Performance Analysis of an Inter-Relay Co-operation in FSO Communication System

    NASA Astrophysics Data System (ADS)

    Khanna, Himanshu; Aggarwal, Mona; Ahuja, Swaran

    2018-04-01

    In this work, we analyze the outage and error performance of a one-way inter-relay-assisted free space optical link. We assume the absence of a direct link between the source and destination nodes, and study the feasibility of such a system configuration. We consider the influence of path loss, atmospheric turbulence, and pointing error impairments, and investigate the effect of these parameters on system performance. The turbulence-induced fading is modeled by independent but not necessarily identically distributed gamma-gamma fading statistics. Closed-form expressions for the outage probability and the probability of error are derived and illustrated by numerical plots. It is concluded that the absence of a line-of-sight path between the source and destination nodes does not lead to significant performance degradation. Moreover, for the system model under consideration, interconnected relaying provides better error performance than non-interconnected relaying and dual-hop serial relaying.
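
    For intuition, outage under gamma-gamma turbulence can be estimated by Monte Carlo: the normalized irradiance is the product of two unit-mean Gamma variates with shapes alpha and beta. The sketch below folds path loss and pointing loss into the average SNR, and all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)
    alpha, beta = 4.0, 2.0              # gamma-gamma turbulence parameters
    n = 1_000_000
    I = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)  # unit-mean irradiance

    snr_avg, snr_th = 20.0, 5.0         # average and threshold electrical SNR (linear)
    outage = np.mean(snr_avg * I ** 2 < snr_th)   # IM/DD: electrical SNR scales as I^2
    print(f"outage probability = {outage:.4f}")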

  8. Formulation of D-brane Dynamics

    NASA Astrophysics Data System (ADS)

    Evans, Thomas

    2012-03-01

    It is the purpose of this paper (within the context of STS rules and guidelines for a "research report") to formulate a statistical-mechanical form of D-brane dynamics. We consider first the path integral formulation of quantum mechanics, and extend this to a path-integral formulation of D-brane mechanics, summing over all the possible path integral sectors of R-R and NS charged states. We then investigate this generalization utilizing a path-integral formulation summing over all the possible path integral sectors of R-R charged states, calculated from the mean probability tree-level amplitude of type I, IIA, and IIB strings, serving as a generalization of all strings described by D-branes. We utilize this generalization to study black holes in regimes where the initial D-brane system is legitimate, and extend it to look at information loss near regions of nonlocality on a non-ordinary event horizon. We see that in these specific regimes we can calculate a path integral formulation describing D0-brane mechanics, tracing the dissipation of entropy through the event horizon. This is used to study the information paradox and to propose a resolution between the phenomena and the correct and expected quantum mechanical description, since our path integral over entropy entering the event horizon effectively encodes the initial state in subtle correlations in the Hawking radiation.

  9. Multiple scattering in planetary regoliths using first-order incoherent interactions

    NASA Astrophysics Data System (ADS)

    Muinonen, Karri; Markkanen, Johannes; Väisänen, Timo; Penttilä, Antti

    2017-10-01

    We consider scattering of light by a planetary regolith modeled using discrete random media of spherical particles. The size of the random medium can range from microscopic sizes of a few wavelengths to macroscopic sizes approaching infinity. The size of the particles is assumed to be of the order of the wavelength. We extend the numerical Monte Carlo method of radiative transfer and coherent backscattering (RT-CB) to the case of dense packing of particles. We adopt the ensemble-averaged first-order incoherent extinction, scattering, and absorption characteristics of a volume element of particles as input for the RT-CB. The volume element must be larger than the wavelength but smaller than the mean free path length of incoherent extinction. In the radiative transfer part, at each absorption and scattering process, we account for absorption with the help of the single-scattering albedo and peel off the Stokes parameters of radiation emerging from the medium in predefined scattering angles. We then generate a new scattering direction using the joint probability density for the local polar and azimuthal scattering angles. In the coherent backscattering part, we utilize amplitude scattering matrices along the radiative-transfer path and the reciprocal path, and utilize the reciprocity of electromagnetic waves to verify the computation. We illustrate the incoherent volume-element scattering characteristics and compare the dense-medium RT-CB to asymptotically exact results computed using the Superposition T-matrix method (STMM). We show that the dense-medium RT-CB compares favorably to the STMM results for the current cases of sparse and dense discrete random media studied. The novel method can be applied in modeling light scattering by the surfaces of asteroids and other airless solar system objects, including UV-Vis-NIR spectroscopy, photometry, polarimetry, and radar scattering problems. Acknowledgments: Research supported by the European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.

  10. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    PubMed

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L³ the sum of the cycle probabilities of length k > L² equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the cycle probabilities π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.
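
    A compact sketch of the Landsberg recursion for the canonical ideal Bose gas, here with a 3D harmonic-trap single-particle spectrum (an assumption for illustration): Z_N = (1/N) sum_{k=1..N} Z_1(k*beta) Z_{N-k}, with cycle probabilities pi_k = Z_1(k*beta) Z_{N-k} / (N Z_N):

    import numpy as np

    def z1(beta, hw=1.0):
        # Single-particle partition function of a 3D isotropic harmonic trap.
        x = np.exp(-beta * hw)
        return x ** 1.5 / (1 - x) ** 3

    N, beta = 40, 0.5
    Z = [1.0]                                          # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n)

    pi = [z1(k * beta) * Z[N - k] / (N * Z[N]) for k in range(1, N + 1)]
    print(f"sum of cycle probabilities: {sum(pi):.6f}")   # 1 by construction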

  11. Dynamic path planning for mobile robot based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    Robots are now used in many fields, such as cleaning, medical treatment, space exploration, and disaster relief. Collision-free dynamic path planning for robots is attracting increasing attention. A new path planning method is proposed in this paper. Firstly, the motion space model of the robot is established using the MAKLINK graph method, and the A* algorithm is used to find the shortest path from the start point to the end point. Secondly, this paper proposes an effective method to detect and avoid obstacles: when an obstacle is detected on the shortest path, the robot moves to the nearest safe point and then chooses the next point closest to the target. Finally, the particle swarm optimization algorithm is used to optimize the path. The experimental results show that the proposed method is effective.
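
    A minimal particle swarm optimization sketch for the final smoothing step: two via-points of a start-to-goal path are optimized to shorten the path while keeping the via-points clear of a circular obstacle. The PSO coefficients follow common defaults, and the scenario is illustrative rather than the paper's MAKLINK environment:

    import numpy as np

    rng = np.random.default_rng(3)
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
    obs_c, obs_r = np.array([5.0, 5.0]), 2.0

    def fitness(via):                      # via: flattened pair of 2D waypoints
        pts = [start, via[:2], via[2:], goal]
        length = sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))
        clearance = min(np.linalg.norm(p - obs_c) for p in pts[1:3])
        return length + (100.0 if clearance < obs_r else 0.0)   # collision penalty

    n, dim, w, c1, c2 = 30, 4, 0.7, 1.5, 1.5
    x = rng.uniform(0, 10, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pval.argmin()].copy()

    for _ in range(100):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fitness(p) for p in x])
        improved = f < pval
        pbest[improved], pval[improved] = x[improved], f[improved]
        g = pbest[pval.argmin()].copy()

    print(f"best path length found: {pval.min():.2f}")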

  12. Quantitative comparison of alternative methods for coarse-graining biological networks

    PubMed Central

    Bowman, Gregory R.; Meng, Luming; Huang, Xuhui

    2013-01-01

    Markov models and master equations are a powerful means of modeling dynamic processes like protein conformational changes. However, these models are often difficult to understand because of the enormous number of components and connections between them. Therefore, a variety of methods have been developed to facilitate understanding by coarse-graining these complex models. Here, we employ Bayesian model comparison to determine which of these coarse-graining methods provides the models that are most faithful to the original set of states. We find that the Bayesian agglomerative clustering engine and the hierarchical Nyström expansion graph (HNEG) typically provide the best performance. Surprisingly, the original Perron cluster cluster analysis (PCCA) method often provides the next best results, outperforming the newer PCCA+ method and the most probable paths algorithm. We also show that the differences between the models are qualitatively significant, rather than being minor shifts in the boundaries between states. The performance of the methods correlates well with the entropy of the resulting coarse-grainings, suggesting that finding states with more similar populations (i.e., avoiding low population states that may just be noise) gives better results. PMID:24089717

  13. Assessment of the transport routes of oversized and excessive loads in relation to the passage through roundabout

    NASA Astrophysics Data System (ADS)

    Petru, Jan; Dolezel, Jiri; Krivda, Vladislav

    2017-09-01

    In the past, excessive and oversized loads were carried on selected routes that were adapted to ensure the smooth passage of such transports. Over the years, maintenance of these passages was abandoned, and currently there are no earmarked routes adapted for this type of transport. Routes for excessive and oversized loads are now planned to ensure the vehicle's passage through critical points on the roads. Critical points are level and fly-over crossings of roads, bridges, toll gates, traffic signs, and electrical and other lines. The article deals with the probabilistic assessment of selected critical points on the route of an excessive load on 1st class roads, in relation to ensuring passage through a roundabout. The basis for assessing the passage of a vehicle with an excessive load through a roundabout is the long-term results of video analyses monitoring the movement of such transports at similar intersections, and the determination of a theoretical probability model of vehicle movement at selected junctions. On the basis of a virtual simulation of vehicle movement at the crossroads and using the Monte Carlo simulation method, the vehicles' paths are analysed and the probability of the vehicle running off the roadway at given junctions is quantified.

  14. The potential role of magmatic gases in the genesis of Illinois- Kentucky fluorspar deposits: implications from chemical reaction path modeling

    USGS Publications Warehouse

    Plumlee, G.S.; Goldhaber, M.B.; Rowan, E.L.

    1995-01-01

    Presents results of reaction path calculations using the chemical speciation and reaction path programs SOLVEQ and CHILLER to model possible fluorite deposition mechanisms in the Illinois-Kentucky fluorspar district. The results indicate that the fluids responsible for Illinois-Kentucky fluorspar mineralization were most likely quite acidic (pH < 4) and rich in fluorine in order to produce the fluorite-rich, dolomite-poor mineral assemblages and extensive dissolution of host limestones. A possible source for the acid and fluorine may have been HF-rich gases that were expelled from alkalic magmas and then incorporated by migrating basinal brines. An analysis of the geologic setting of other fluorite deposits and districts worldwide suggests that involvement of magmatic gases is probable for many of these districts as well. -from Authors

  15. Evaluation of Methods to Estimate the Surface Downwelling Longwave Flux during Arctic Winter

    NASA Technical Reports Server (NTRS)

    Chiacchio, Marc; Francis, Jennifer; Stackhouse, Paul, Jr.

    2002-01-01

    Surface longwave radiation fluxes dominate the energy budget of nighttime polar regions, yet little is known about the relative accuracy of existing satellite-based techniques to estimate this parameter. We compare eight methods to estimate the downwelling longwave radiation flux and validate their performance with measurements from two field programs in the Arctic: the Coordinated Eastern Arctic Experiment (CEAREX) conducted in the Barents Sea during the autumn and winter of 1988, and the Lead Experiment performed in the Beaufort Sea in the spring of 1992. Five of the eight methods were developed for satellite-derived quantities, and three are simple parameterizations based on surface observations. All of the algorithms require information about cloud fraction, which is provided by the NASA-NOAA Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) polar pathfinder dataset (Path-P); some techniques ingest temperature and moisture profiles (also from Path-P); one-half of the methods assume that clouds are opaque and have a constant geometric thickness of 50 hPa, and three include no thickness information whatsoever. With a somewhat limited validation dataset, the following primary conclusions result: (1) all methods exhibit approximately the same correlations with measurements and rms differences, but the biases range from -34 W/sq m (16% of the mean) to nearly 0; (2) the error analysis described here indicates that the assumption of a 50-hPa cloud thickness is too thin by a factor of 2 on average in polar nighttime conditions; (3) cloud-overlap techniques, which effectively increase mean cloud thickness, significantly improve the results; (4) simple Arctic-specific parameterizations performed poorly, probably because they were developed with surface-observed cloud fractions; and (5) the single algorithm that includes an estimate of cloud thickness exhibits the smallest differences from observations.

  16. Simulating Mission Command for Planning and Analysis

    DTIC Science & Technology

    2015-06-01

    Excerpt (report documentation page): subject terms include Mission Planning, CPM, PERT, Simulation, DES, Simkit, Triangle Distribution, and Critical Path; the glossary lists BN TF (Battalion Task Force), CO (Company), CPM (Critical Path Method), DES (Discrete Event Simulation), FA BAT (Field Artillery Battalion), FEL (Future Event List), FIST (...). The report discusses project management tools that can be utilized to find the critical path in military projects: the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT).

  17. Inference of strata separation and gas emission paths in longwall overburden using continuous wavelet transform of well logs and geostatistical simulation

    NASA Astrophysics Data System (ADS)

    Karacan, C. Özgen; Olea, Ricardo A.

    2014-06-01

    Prediction of potential methane emission pathways from various sources into active mine workings or sealed gobs from longwall overburden is important for controlling methane and for improving mining safety. The aim of this paper is to infer strata separation intervals and thus gas emission pathways from standard well log data. The proposed technique was applied to well logs acquired through the Mary Lee/Blue Creek coal seam of the Upper Pottsville Formation in the Black Warrior Basin, Alabama, using well logs from a series of boreholes aligned along a nearly linear profile. For this purpose, continuous wavelet transform (CWT) of digitized gamma well logs was performed by using Mexican hat and Morlet, as the mother wavelets, to identify potential discontinuities in the signal. Pointwise Hölder exponents (PHE) of gamma logs were also computed using the generalized quadratic variations (GQV) method to identify the location and strength of singularities of well log signals as a complementary analysis. PHEs and wavelet coefficients were analyzed to find the locations of singularities along the logs. Using the well logs in this study, locations of predicted singularities were used as indicators in single normal equation simulation (SNESIM) to generate equi-probable realizations of potential strata separation intervals. Horizontal and vertical variograms of realizations were then analyzed and compared with those of indicator data and training image (TI) data using the Kruskal-Wallis test. A sum of squared differences was employed to select the most probable realization representing the locations of potential strata separations and methane flow paths. Results indicated that singularities located in well log signals reliably correlated with strata transitions or discontinuities within the strata. Geostatistical simulation of these discontinuities provided information about the location and extents of the continuous channels that may form during mining. If there is a gas source within their zone of influence, paths may develop and allow methane movement towards sealed or active gobs under pressure differentials. Knowledge gained from this research will better prepare mine operations for potential methane inflows, thus improving mine safety.
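
    The wavelet step above can be sketched with PyWavelets: a continuous wavelet transform of a gamma log using the Mexican hat mother wavelet, with large aggregated coefficient magnitudes flagging candidate discontinuities. The synthetic log, scales, and boundary handling are illustrative:

    import numpy as np
    import pywt

    depth = np.arange(0, 200, 0.5)                     # depth axis [ft]
    rng = np.random.default_rng(4)
    gamma = np.where(depth < 120, 60.0, 95.0)          # step = strata transition
    gamma = gamma - gamma.mean() + 5.0 * rng.normal(size=depth.size)

    coef, _ = pywt.cwt(gamma, scales=np.arange(1, 32), wavelet="mexh")
    response = np.abs(coef).sum(axis=0)                # aggregate across scales
    response[:40] = response[-40:] = 0.0               # suppress boundary artifacts
    print(f"strongest singularity near depth {depth[response.argmax()]} ft")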

  18. Tree attenuation at 20 GHz: Foliage effects

    NASA Technical Reports Server (NTRS)

    Vogel, Wolfhard J.; Goldhirsh, Julius

    1993-01-01

    Static tree attenuation measurements at 20 GHz (K-Band) on a 30 deg slant path through a mature Pecan tree with and without leaves showed median fades exceeding approximately 23 dB and 7 dB, respectively. The corresponding 1% probability fades were 43 dB and 25 dB. Previous 1.6 GHz (L-Band) measurements for the bare tree case showed fades larger than those at K-Band by 3.4 dB for the median and smaller by approximately 7 dB at the 1% probability. While the presence of foliage had only a small effect on fading at L-Band (approximately 1 dB additional for the median to 1% probability range), the attenuation increase was significant at K-Band, where it increased by about 17 dB over the same probability range.

  19. Tree attenuation at 20 GHz: Foliage effects

    NASA Astrophysics Data System (ADS)

    Vogel, Wolfhard J.; Goldhirsh, Julius

    1993-08-01

    Static tree attenuation measurements at 20 GHz (K-Band) on a 30 deg slant path through a mature Pecan tree with and without leaves showed median fades exceeding approximately 23 dB and 7 dB, respectively. The corresponding 1% probability fades were 43 dB and 25 dB. Previous 1.6 GHz (L-Band) measurements for the bare tree case showed fades larger than those at K-Band by 3.4 dB for the median and smaller by approximately 7 dB at the 1% probability. While the presence of foliage had only a small effect on fading at L-Band (approximately 1 dB additional for the median to 1% probability range), the attenuation increase was significant at K-Band, where it increased by about 17 dB over the same probability range.

  20. The Futurist Perspective: Implications for Community College Planning.

    ERIC Educational Resources Information Center

    Nicholson, R. Stephen; Keyser, John S.

    Community college managers would probably acknowledge the importance of planning, but might not accept the need to adopt a futuristic perspective on educational planning. One of the characteristics of futurists is a belief that the future is a created reality, not a consequence of random events. Futurists conceive possible paths, examine…

  1. Measurement of Attenuation with Airborne and Ground-Based Radar in Convective Storms Over Land and Its Microphysical Implications

    NASA Technical Reports Server (NTRS)

    Tian, Lin; Heymsfield, G. M.; Srivastava, R. C.; O'C.Starr, D. (Technical Monitor)

    2001-01-01

    Observations by the airborne X-band Doppler radar (EDOP) and the NCAR S-band polarimetric (S-Pol) radar from two field experiments are used to evaluate the surface reference technique (SRT) for measuring the path integrated attenuation (PIA) and to study attenuation in deep convective storms. The EDOP, flying at an altitude of 20 km, uses a nadir beam and a forward pointing beam. It is found that over land, the surface scattering cross-section is highly variable at nadir incidence but relatively stable at forward incidence. It is concluded that measurement by the forward beam provides a viable technique for measuring PIA using the SRT. Vertical profiles of peak attenuation coefficient are derived in two deep convective storms by the dual-wavelength method. Using the measured Doppler velocity, the reflectivities at the two wavelengths, the differential reflectivity and the estimated attenuation coefficients, it is shown that supercooled drops and (dry) ice particles probably co-existed above the melting level in regions of updraft, and that water-coated, partially melted ice particles probably contributed to high attenuation below the melting level.

  2. Simple ray tracing of Galileo-observed hectometric attenuation features

    NASA Astrophysics Data System (ADS)

    Higgins, Charles A.; Thieman, James R.; Fung, Shing F.; Green, James L.; Candey, Robert M.

    Observations of persistent structural features within Jovian hectometric (HOM) radio emission have been made with the Galileo spacecraft. Two well-defined sinusoidal-shaped "band" features of reduced emission intensity and occurrence probability exist at all Jovian longitudes and nearly cover the entire spectrum of HOM radio emission from ~500 kHz to 3000 kHz. These two sinusoidal lanes have a bandwidth of 200-400 kHz and are 180° out of phase with one another, suggesting that they are a result of HOM radio emission propagation processes from opposite hemispheres. These features become more apparent when presented as intensity or occurrence probability spectrograms added together over multiple Jovian rotations. Enhancements in the HOM intensity and occurrence are seen along the edges of one of the observed sinusoidal lane features, which may indicate caustic surfaces due to refraction along the propagation path. We present some simple ray tracing analyses to show that refraction from density enhancements in the Io torus flux tube may explain some of the observations. Using this simple method, we approximate the density enhancements in the Io flux tube to be 100 cm-3.

  3. Huygens-Fresnel principle: Analyzing consistency at the photon level

    NASA Astrophysics Data System (ADS)

    Santos, Elkin A.; Castro, Ferney; Torres, Rafael

    2018-04-01

    The use of the Rayleigh-Sommerfeld diffraction formula as a photon propagator is widely accepted due to abundant experimental evidence suggesting that it works. However, a direct link between the propagation of the electromagnetic field in classical optics and the propagation of photons, where the square of the probability amplitude describes the transverse probability of photon detection, remains to be clarified. We develop a mathematical formulation for photon propagation using the formalism of electromagnetic field quantization and the path-integral method, whose main feature is its similarity to a fractional Fourier transform (FRFT). We show that, because of the close relation between the FRFT and the Fresnel diffraction integral, this propagator can be written as a Fresnel diffraction, which prompts a discussion of its fundamental character at the photon level in comparison with the Huygens-Fresnel principle. Finally, we carry out a photon-counting experiment with a rectangular slit, supporting the result that diffraction in the Fresnel approximation behaves as the actual classical limit.

  4. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.
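
    The tree-growing idea can be illustrated with a minimal sketch in a toy 2-D "conformational" space; the paper's method operates in protein conformational space with fragment replacement and energy checks, none of which is modeled here, and grow_tree, the step size, and the goal bias are hypothetical choices for the example.

        import random, math

        def grow_tree(start, goal, step=0.05, goal_bias=0.2, iters=5000, tol=0.05):
            # Nodes store (point, parent_index); a soft goal bias trades
            # coverage of the space against progress toward the goal region.
            nodes = [(start, None)]
            for _ in range(iters):
                target = goal if random.random() < goal_bias else \
                         (random.uniform(0, 1), random.uniform(0, 1))
                # extend the node nearest to the sample by a small step
                i = min(range(len(nodes)),
                        key=lambda k: math.dist(nodes[k][0], target))
                p = nodes[i][0]
                d = math.dist(p, target)
                if d == 0.0:
                    continue
                q = (p[0] + step * (target[0] - p[0]) / d,
                     p[1] + step * (target[1] - p[1]) / d)
                nodes.append((q, i))
                if math.dist(q, goal) < tol:        # reached the goal region
                    path, k = [], len(nodes) - 1
                    while k is not None:            # backtrack to the root
                        path.append(nodes[k][0])
                        k = nodes[k][1]
                    return path[::-1]
            return None

        path = grow_tree((0.1, 0.1), (0.9, 0.9))
        print("no path" if path is None else f"{len(path)} waypoints to the goal")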

  5. Time signal distribution in communication networks based on synchronous digital hierarchy

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1993-01-01

    A new method that uses round-trip paths to accurately measure transmission delay for time synchronization is proposed. The performance of the method in Synchronous Digital Hierarchy (SDH) networks is discussed. The feature of this method is that it separately measures the initial round-trip path delay and the variations in round-trip path delay. The delay generated in SDH equipment is determined by measuring the initial round-trip path delay. In an experiment with actual SDH equipment, the error of the initial delay measurement was suppressed to 30 ns.
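
    A minimal numeric sketch of the idea, assuming a symmetric path (the delay values are invented): the initial round-trip measurement fixes the equipment and propagation delay, and subsequent measurements contribute only variations around it.

        # Toy illustration: one-way delay = half the initial round-trip delay,
        # plus half of any later variation in the round-trip measurement.
        initial_rtt = 2.0046e-3          # s, measured once at setup
        one_way = initial_rtt / 2.0      # symmetric-path assumption
        for rtt in (2.0046e-3, 2.0052e-3, 2.0041e-3):   # periodic measurements
            variation = (rtt - initial_rtt) / 2.0
            print(f"one-way delay estimate: {one_way + variation:.7f} s")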

  6. Layered data association using graph-theoretic formulation with applications to tennis ball tracking in monocular sequences.

    PubMed

    Yan, Fei; Christmas, William; Kittler, Josef

    2008-10-01

    In this paper, we propose a multilayered data association scheme with graph-theoretic formulation for tracking multiple objects that undergo switching dynamics in clutter. The proposed scheme takes as input object candidates detected in each frame. At the object candidate level, "tracklets" are "grown" from sets of candidates that have high probabilities of containing only true positives. At the tracklet level, a directed and weighted graph is constructed, where each node is a tracklet, and the edge weight between two nodes is defined according to the "compatibility" of the two tracklets. The association problem is then formulated as an all-pairs shortest path (APSP) problem in this graph. Finally, at the path level, by analyzing the APSPs, all object trajectories are identified, and track initiation and track termination are automatically dealt with. By exploiting a special topological property of the graph, we have also developed a more efficient APSP algorithm than the general-purpose ones. The proposed data association scheme is applied to tennis sequences to track tennis balls. Experiments show that it works well on sequences where other data association methods perform poorly or fail completely.
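
    A hedged sketch of the tracklet-graph formulation follows, with toy weights rather than the paper's compatibility function, and plain Dijkstra in place of the authors' specialized APSP algorithm.

        import heapq

        def shortest_path(graph, src, dst):
            # Dijkstra on a directed tracklet graph {node: [(succ, cost), ...]};
            # lower cost means the two tracklets are more compatible.
            dist, prev = {src: 0.0}, {}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u == dst:
                    break
                if d > dist.get(u, float("inf")):
                    continue
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            if dst not in dist:
                return None
            path = [dst]
            while path[-1] != src:                 # backtrack along predecessors
                path.append(prev[path[-1]])
            return path[::-1]

        # Toy tracklet graph: A..D are tracklets, weights are compatibility costs.
        g = {"A": [("B", 1.0), ("C", 4.0)], "B": [("D", 2.0)], "C": [("D", 0.5)]}
        print(shortest_path(g, "A", "D"))   # -> ['A', 'B', 'D']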

  7. Geometric and topological characterization of porous media: insights from eigenvector centrality

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, J.; Negre, C.

    2017-12-01

    Solving flow and transport through complex geometries such as porous media involves an extreme computational cost. Simplifications such as pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models have the ability to preserve the connectivity of the medium. However, they have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Network theory approaches, where the complex network is conceptualized as a graph, can help to simplify and better understand fluid dynamics and transport in porous media. To address this issue, we propose a method based on eigenvector centrality. It has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction, which allows considering the flow and transport anisotropy in porous media. The model predictions are compared with millifluidic transport experiments, showing that this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. Entropy computed from the eigenvector centrality probability distribution is proposed as an indicator of the "mixing capacity" of the system.
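
    A minimal sketch of plain eigenvector centrality plus the entropy indicator, computed by power iteration on an invented toy pore network; the paper's centralization correction and directional bias are not modeled here.

        import numpy as np

        def eigenvector_centrality(A, iters=200, tol=1e-10):
            # Power iteration on the adjacency matrix of the pore network.
            x = np.ones(A.shape[0]) / A.shape[0]
            for _ in range(iters):
                y = A @ x
                y /= np.linalg.norm(y)
                done = np.linalg.norm(y - x) < tol
                x = y
                if done:
                    break
            return x / x.sum()          # normalize to a probability distribution

        # Toy pore network adjacency (1 = a pore throat connects the two pores).
        A = np.array([[0, 1, 0, 0, 1],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 1],
                      [0, 0, 1, 0, 1],
                      [1, 0, 1, 1, 0]], dtype=float)
        p = eigenvector_centrality(A)
        entropy = -np.sum(p * np.log(p))  # proposed "mixing capacity" indicator
        print("centrality:", np.round(p, 3), " entropy:", round(entropy, 3))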

  8. Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.

    PubMed

    Steel, Ruth Irene

    2015-01-01

    Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
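
    The MTP arithmetic itself is a weighted sum, as in this toy example (the path lengths and group fractions are invented):

        # Each travel path's length is weighted by the fraction of the group
        # that used it, and the weighted lengths are summed into the DTD.
        paths = [
            {"length_m": 850.0, "fraction": 0.6},   # main path, 60% of the group
            {"length_m": 990.0, "fraction": 0.4},   # detour used by the rest
        ]
        dtd = sum(p["length_m"] * p["fraction"] for p in paths)
        print(f"MTP daily travel distance: {dtd:.0f} m")   # 906 m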

  9. APPLYING OPEN-PATH OPTICAL SPECTROSCOPY TO HEAVY-DUTY DIESEL EMISSIONS

    EPA Science Inventory

    Non-dispersive infrared absorption has been used to measure gaseous emissions for both stationary and mobile sources. Fourier transform infrared spectroscopy has been used for stationary sources as both extractive and open-path methods. We have applied the open-path method for bo...

  10. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
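
    For intuition about the quantity being sampled, a brute-force baseline can count optimal sorting paths exactly for tiny signed permutations; this exponential toy is not the MC4Inversion algorithm, which is needed precisely because such enumeration does not scale.

        from collections import deque

        def reversals(p):
            # All signed reversals of a signed permutation (tuple of nonzero ints).
            n = len(p)
            for i in range(n):
                for j in range(i + 1, n + 1):
                    yield p[:i] + tuple(-x for x in reversed(p[i:j])) + p[j:]

        def count_optimal_paths(start):
            goal = tuple(range(1, len(start) + 1))
            # BFS from the identity gives the inversion distance of every state.
            dist = {goal: 0}
            queue = deque([goal])
            while queue:
                s = queue.popleft()
                for t in reversals(s):
                    if t not in dist:
                        dist[t] = dist[s] + 1
                        queue.append(t)
            # Count shortest paths: each step must decrease the distance by one.
            memo = {goal: 1}
            def count(s):
                if s not in memo:
                    memo[s] = sum(count(t) for t in reversals(s)
                                  if dist[t] == dist[s] - 1)
                return memo[s]
            return dist[start], count(start)

        d, n_paths = count_optimal_paths((-3, 1, 2, -4))
        print(f"inversion distance {d}, {n_paths} optimal sorting paths")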

  11. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
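
    For context, the classical recursive computation that such path-based methods accelerate can be sketched as follows, assuming individuals are numbered so that parents precede their children (the pedigree itself is invented for the example):

        from functools import lru_cache

        # Pedigree as id -> (father, mother); founders have (None, None).
        PED = {1: (None, None), 2: (None, None), 3: (1, 2),
               4: (1, 2), 5: (3, 4)}   # 5 is the child of two full siblings

        @lru_cache(maxsize=None)
        def kinship(a, b):
            if a is None or b is None:
                return 0.0
            if a == b:
                f, m = PED[a]
                return 0.5 * (1.0 + kinship(f, m))
            if a > b:                  # recurse through the younger individual
                a, b = b, a
            f, m = PED[b]
            return 0.5 * (kinship(a, f) + kinship(a, m))

        def inbreeding(c):
            # Inbreeding coefficient = kinship coefficient of the parents.
            f, m = PED[c]
            return kinship(f, m)

        print(inbreeding(5))   # 0.25 for the offspring of full siblings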

  12. Adaptive hybrid simulations for multiscale stochastic reaction networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.

  13. Adaptive hybrid simulations for multiscale stochastic reaction networks.

    PubMed

    Hepp, Benjamin; Gupta, Ankit; Khammash, Mustafa

    2015-01-21

    The probability distribution describing the state of a Stochastic Reaction Network (SRN) evolves according to the Chemical Master Equation (CME). It is common to estimate its solution using Monte Carlo methods such as the Stochastic Simulation Algorithm (SSA). In many cases, these simulations can take an impractical amount of computational time. Therefore, many methods have been developed that approximate sample paths of the underlying stochastic process and estimate the solution of the CME. A prominent class of these methods include hybrid methods that partition the set of species and the set of reactions into discrete and continuous subsets. Such a partition separates the dynamics into a discrete and a continuous part. Simulating such a stochastic process can be computationally much easier than simulating the exact discrete stochastic process with SSA. Moreover, the quasi-stationary assumption to approximate the dynamics of fast subnetworks can be applied for certain classes of networks. However, as the dynamics of a SRN evolves, these partitions may have to be adapted during the simulation. We develop a hybrid method that approximates the solution of a CME by automatically partitioning the reactions and species sets into discrete and continuous components and applying the quasi-stationary assumption on identifiable fast subnetworks. Our method does not require any user intervention and it adapts to exploit the changing timescale separation between reactions and/or changing magnitudes of copy-numbers of constituent species. We demonstrate the efficiency of the proposed method by considering examples from systems biology and showing that very good approximations to the exact probability distributions can be achieved in significantly less computational time. This is especially the case for systems with oscillatory dynamics, where the system dynamics change considerably throughout the time-period of interest.
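
    For reference, the exact discrete simulation that hybrid methods seek to avoid can be sketched as a plain Gillespie SSA on a toy birth-death network; the rates and stoichiometry are invented for the example.

        import random

        def ssa(x, rates, stoich, t_end):
            # Gillespie's Stochastic Simulation Algorithm: exact sampling of
            # the CME that hybrid methods approximate for large systems.
            t = 0.0
            while t < t_end:
                props = [r(x) for r in rates]
                total = sum(props)
                if total == 0.0:
                    break
                t += random.expovariate(total)       # time to the next reaction
                u, acc = random.random() * total, 0.0
                for k, a in enumerate(props):        # pick reaction k
                    acc += a
                    if u < acc:
                        x = [xi + si for xi, si in zip(x, stoich[k])]
                        break
            return x

        # Birth-death network: 0 -> X at rate 10, X -> 0 at rate 0.1 * x.
        rates  = [lambda x: 10.0, lambda x: 0.1 * x[0]]
        stoich = [(+1,), (-1,)]
        samples = [ssa([0], rates, stoich, t_end=100.0)[0] for _ in range(200)]
        print("mean copy number ~", sum(samples) / len(samples))  # near 100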

  14. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map.

    PubMed

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S

    2008-04-11

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes.
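
    The exponential map for SE(2) mentioned above has a simple closed form; this sketch implements it and cross-checks against a generic matrix exponential (the numerical values are arbitrary).

        import numpy as np
        from scipy.linalg import expm

        def se2_exp(v1, v2, w):
            # Closed-form exponential map from the Lie algebra se(2) to SE(2).
            if abs(w) < 1e-12:                   # pure translation limit
                return np.array([[1.0, 0.0, v1],
                                 [0.0, 1.0, v2],
                                 [0.0, 0.0, 1.0]])
            c, s = np.cos(w), np.sin(w)
            # V-matrix integrates the rotation along the screw motion.
            V = np.array([[s / w, -(1 - c) / w],
                          [(1 - c) / w, s / w]])
            t = V @ np.array([v1, v2])
            return np.array([[c, -s, t[0]],
                             [s,  c, t[1]],
                             [0.0, 0.0, 1.0]])

        v1, v2, w = 0.3, -0.1, 0.7
        X = np.array([[0.0, -w, v1], [w, 0.0, v2], [0.0, 0.0, 0.0]])
        print(np.allclose(se2_exp(v1, v2, w), expm(X)))   # True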

  15. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems that involve latent variables, and path analysis is one method for analyzing models of such systems. Latent variables measured with questionnaires that apply an attitude-scale model yield data in the form of scores, which must be transformed to scale data before analysis. Path coefficients, the parameter estimators, are calculated from scale data obtained with the method of successive interval (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better. Path coefficients with smaller variances are said to be more efficient: the transformation method that produces scale data yielding path coefficients (parameter estimators) with smaller variances is said to be better. Analysis of real data shows that, for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using the MSI and SRS transformations are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is about 1.3 times more efficient than the SRS method.

  16. Routing and spectrum assignment based on ant colony optimization of minimum consecutiveness loss in elastic optical networks

    NASA Astrophysics Data System (ADS)

    Wang, Fu; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Tian, Qinghua; Zhang, Qi; Rao, Lan; Tian, Feng; Luo, Biao; Liu, Yingjun; Tang, Bao

    2016-10-01

    Elastic optical networks are considered a promising technology for future high-speed networks. In this paper, we propose a routing and spectrum assignment (RSA) algorithm based on ant colony optimization of minimum consecutiveness loss (ACO-MCL). Because spectrum consecutiveness loss feeds back into the pheromone of the ant colony optimization, the path and spectrum with the minimal impact on the network are selected for each service request. When an ant arrives at the destination node from the source node along a path, we tentatively assume that this path is selected for the request and calculate the consecutiveness loss of candidate-neighbor link pairs along it after routing and spectrum assignment. The network then updates the pheromone according to the value of the consecutiveness loss, and the path with the smallest value is saved. After multiple iterations of the ant colony optimization, the finally selected path is assigned to the request. The algorithms are simulated in different networks. The results show that the ACO-MCL algorithm performs better in blocking probability and spectrum efficiency than the other algorithms. Moreover, the ACO-MCL algorithm can effectively decrease spectrum fragmentation and enhance available spectrum consecutiveness. Compared with the other algorithms, the ACO-MCL algorithm reduces the blocking rate by at least 5.9% under heavy load.
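
    A simplified stand-in for the consecutiveness-loss computation can be sketched as follows, using the largest free block on a link as the consecutiveness measure; this is an illustrative metric, not necessarily the paper's exact candidate-neighbor formulation.

        def free_blocks(spectrum):
            # Lengths of maximal runs of free slots (0 = free, 1 = occupied).
            blocks, run = [], 0
            for slot in spectrum + [1]:
                if slot == 0:
                    run += 1
                elif run:
                    blocks.append(run)
                    run = 0
            return blocks

        def consecutiveness(spectrum):
            # Simplified consecutiveness measure: the largest free block.
            blocks = free_blocks(spectrum)
            return max(blocks) if blocks else 0

        def assignment_loss(spectrum, start, size):
            # Consecutiveness loss from occupying slots [start, start + size).
            after = list(spectrum)
            after[start:start + size] = [1] * size
            return consecutiveness(spectrum) - consecutiveness(after)

        link = [1, 0, 0, 0, 0, 0, 1, 0, 0, 1]       # 10 spectrum slots
        # Splitting the big free block costs more consecutiveness than
        # placing the request flush against an occupied slot.
        print(assignment_loss(link, 3, 2))  # splits the 5-block -> loss 3
        print(assignment_loss(link, 1, 2))  # trims the 5-block  -> loss 2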

  17. Generalized quantum interference of correlated photon pairs.

    PubMed

    Kim, Heonoh; Lee, Sang Min; Moon, Han Seb

    2015-05-07

    Superposition and indistinguishability between probability amplitudes have played an essential role in observing quantum interference effects of correlated photons. The Hong-Ou-Mandel interference and interferences of the path-entangled photon number state are of special interest in the field of quantum information technologies. However, a fully generalized two-photon quantum interferometric scheme accounting for the Hong-Ou-Mandel scheme and path-entangled photon number states has not yet been proposed. Here we report the experimental demonstrations of the generalized two-photon interferometry with both the interferometric properties of the Hong-Ou-Mandel effect and the fully unfolded version of the path-entangled photon number state using photon-pair sources, which are independently generated by spontaneous parametric down-conversion. Our experimental scheme explains two-photon interference fringes revealing single- and two-photon coherence properties in a single interferometer setup. Using the proposed interferometric measurement, it is possible to directly estimate the joint spectral intensity of a photon pair source.

  18. Social network analysis using k-Path centrality method

    NASA Astrophysics Data System (ADS)

    Taniarza, Natya; Adiwijaya; Maharani, Warih

    2018-03-01

    k-Path centrality is considered an effective method of centrality measurement in which a node is estimated to be influential if information paths pass through it frequently. In this paper, k-Path centrality with a randomized-algorithm approach is employed to: (1) rank influential users in the social medium Twitter; and (2) ascertain the influence of the parameter α on the computation of k-Path centrality. The findings show that k-Path centrality with the randomized-algorithm approach can be used to rank users by their influence on the dissemination of information in Twitter. The findings also show that the parameter α affects both run time and ranking: smaller α values lead to longer run times but more stable ranking results.
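
    A simplified sketch of randomized k-path centrality on a toy graph follows; the trial-count formula and the graph are assumptions for the example, chosen so that smaller α yields more sampled paths and hence longer run times.

        import math, random

        def k_path_centrality(adj, k=3, alpha=0.2):
            # Randomized approximation: nodes frequently traversed by short
            # random simple paths score high. Smaller alpha -> more trials.
            nodes = list(adj)
            n = len(nodes)
            trials = int(2 * k ** 2 * n ** (1 - 2 * alpha) * math.log(n)) + 1
            counts = {v: 0 for v in nodes}
            for _ in range(trials):
                v = random.choice(nodes)
                visited = {v}
                for _ in range(random.randint(1, k)):   # random path length <= k
                    nxt = [u for u in adj[v] if u not in visited]
                    if not nxt:
                        break
                    v = random.choice(nxt)
                    visited.add(v)
                    counts[v] += 1
            return {v: c / trials for v, c in counts.items()}

        # Toy retweet graph: an edge u -> v means information flows from u to v.
        g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"], "e": []}
        ranking = sorted(k_path_centrality(g).items(), key=lambda kv: -kv[1])
        print(ranking)   # "d" ranks high: many information paths pass through it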

  19. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL

    2011-11-08

    Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  20. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158

  1. Black Swans and the Effectiveness of Remediating Groundwater Contamination

    NASA Astrophysics Data System (ADS)

    Siegel, D. I.; Otz, M. H.; Otz, I.

    2013-12-01

    Black swans, outliers, dominate science far more than do predictable outcomes. Predictable success constitutes the Black Swan in groundwater remediation. Even the National Research Council concluded that remediating groundwater to drinking water standards has failed in typically complex hydrogeologic settings where heterogeneities and preferential flow paths deflect flow paths obliquely to hydraulic gradients. Natural systems, be they biological or physical, build upon a combination of large-scale regularity coupled to chaos at smaller scales. We show through a review of over 25 case studies that groundwater remediation efforts are best served by coupling parsimonious site characterization to natural and induced geochemical tracer tests to at least know where contamination advects with groundwater in the subsurface. In the majority of our case studies, actual flow paths diverge tens of degrees from anticipated flow paths because of unrecognized heterogeneities in the horizontal direction of transport, let alone the vertical direction. Consequently, regulatory agencies would better serve both the public and the environment by recognizing that long-term groundwater cleanup probably is futile in most hydrogeologic settings except to relaxed standards similar to brownfielding. A Black Swan

  2. Explore Stochastic Instabilities of Periodic Points by Transition Path Theory

    NASA Astrophysics Data System (ADS)

    Cao, Yu; Lin, Ling; Zhou, Xiang

    2016-06-01

    We consider the noise-induced transitions from a linearly stable periodic orbit consisting of T periodic points in a randomly perturbed discrete logistic map. Traditional large deviation theory and asymptotic analysis in the small-noise limit cannot distinguish the quantitative difference in noise-induced stochastic instabilities among the T periodic points. To attack this problem, we generalize the transition path theory to the discrete-time continuous-space stochastic process. In our first criterion to quantify the relative instability among T periodic points, we use the distribution of the last passage location related to the transitions from the whole periodic orbit to a prescribed disjoint set. This distribution is related to individual contributions to the transition rate from each periodic point. The second criterion is based on the competency of the transition paths associated with each periodic point. Both criteria utilize the reactive probability current in the transition path theory. Our numerical results for the logistic map reveal the transition mechanism of escaping from the stable periodic orbit and identify which periodic point is more prone to lose stability so as to make successful transitions under random perturbations.
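
    A toy Monte Carlo version of the first criterion (the last-passage location before escape from the periodic orbit) can be sketched for the noisy logistic map; the noise level and neighborhood radii are illustrative choices, not the paper's values.

        import random

        r, sigma = 3.3, 0.04                 # stable period-2 regime plus noise
        p = [0.4794, 0.8236]                 # deterministic period-2 points at r = 3.3
        NEAR, ESCAPE = 0.08, 0.25            # orbit neighborhood / escape radius

        last_passage = {0: 0, 1: 0}
        x, last = p[0], 0
        for _ in range(500_000):
            x = r * x * (1 - x) + sigma * random.gauss(0.0, 1.0)
            d = [abs(x - p[0]), abs(x - p[1])]
            if min(d) < NEAR:                # still hugging the periodic orbit
                last = d.index(min(d))
            elif min(d) > ESCAPE or not 0.0 < x < 1.0:
                last_passage[last] += 1      # escaped: record last point visited
                x, last = p[0], 0            # restart near the orbit
        print(last_passage)                  # unequal counts: unequal stability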

  3. Selection of test paths for solder joint intermittent connection faults under DC stimulus

    NASA Astrophysics Data System (ADS)

    Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen

    2018-06-01

    Test paths for solder-joint intermittent connection faults under direct-current stimulus are examined in this paper. A network model is first established from the physical structure of the circuit: a network node represents a test node, and the weight of a path edge is the number of intermittent connection faults along the path. Selection criteria for test paths based on a node-degree index are then proposed so that the solder-joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method; to test whether an intermittent fault is covered by the test paths, the intermittent fault is simulated by a switch. The results show that the proposed method can detect the solder-joint intermittent connection faults using fewer test paths. Additionally, the number of detection steps is greatly reduced without compromising fault coverage.

  4. Shortest multiple disconnected path for the analysis of entanglements in two- and three-dimensional polymeric systems

    NASA Astrophysics Data System (ADS)

    Kröger, Martin

    2005-06-01

    We present an algorithm which returns a shortest path and related number of entanglements for a given configuration of a polymeric system in 2 or 3 dimensions. Rubinstein and Helfand, and later Everaers et al., introduced a concept to extract primitive paths for dense polymeric melts made of linear chains (a multiple disconnected multibead 'path'), where each primitive path is defined as a path connecting the (space-fixed) ends of a polymer under the constraint of non-interpenetration (excluded volume) between primitive paths of different chains, such that the multiple disconnected path fulfills a minimization criterion. The present algorithm uses geometrical operations and provides a model-independent, efficient, approximate solution to this challenging problem. Primitive paths are treated as 'infinitely' thin (we further allow for finite thickness to model excluded volume) and tensionless lines rather than multibead chains; excluded volume is taken into account without a force law. The present implementation allows construction of a shortest multiple disconnected path (SP) for 2D systems (polymeric chain within spherical obstacles) and an optimal SP for 3D systems (collection of polymeric chains). The number of entanglements is then simply obtained from the SP as either the number of interior kinks, or from the average length of a line segment. Further, information about structure and potentially also the dynamics of entanglements is immediately available from the SP. We apply the method to study the 'concentration' dependence of the degree of entanglement in phantom chain systems.
    Program summary
    Title of program: Z
    Catalogue number: ADVG
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVG
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer for which the program is designed and others on which it has been tested: Silicon Graphics (Irix), Sun (Solaris), PC (Linux)
    Operating systems or monitors under which the program has been tested: UNIX, Linux
    Program language used: ANSI Fortran 77 and Fortran 90
    Memory required to execute with typical data: 1 MByte
    No. of lines in distributed program, including test data, etc.: 10 660
    No. of bytes in distributed program, including test data, etc.: 119 551
    Distribution format: tar.gz
    Nature of physical problem: The problem is to obtain primitive paths substantiating a shortest multiple disconnected path (SP) for a given polymer configuration (chains of particles, with or without additional single particles as obstacles for the 2D case). Primitive paths are here defined as in [M. Rubinstein, E. Helfand, J. Chem. Phys. 82 (1985) 2477; R. Everaers, S.K. Sukumaran, G.S. Grest, C. Svaneborg, A. Sivasubramanian, K. Kremer, Science 303 (2004) 823] as the shortest line (path) respecting 'topological' constraints (from neighboring polymers or point obstacles) between ends of polymers. There is a unique solution for the 2D case. For the 3D case it is unique if we construct a primitive path of a single chain embedded within fixed line obstacles [J.S.B. Mitchell, Geometric shortest paths and network optimization, in: J.-R. Sack, J. Urrutia (Eds.), Handbook of Computational Geometry, Elsevier, Amsterdam, 2000, pp. 633-701]. For a large 3D configuration made of several chains, 'short' means the Euclidean shortest multiple disconnected path (SP), where primitive paths are constructed for all chains simultaneously.
While the latter problem, in general, does not possess a unique solution, the algorithm must return a locally optimal solution, robust against minor displacements of the disconnected path and chain re-labeling. The problem is solved if the number of kinks (or entanglements Z), explicitly deduced from the SP, is quite insensitive to the exact conformation of the SP, which allows Z to be estimated with a small error.
    Efficient method of solution: Primitive paths are constructed from the given polymer configuration (a non-shortest multiple disconnected path, including obstacles, if present) by first replacing each polymer contour by a line with a number of 'kinks' (beads, nodes) and 'segments' (edges). To obtain primitive paths, defined to be uncrossable by any other objects (neighboring primitive paths, line or point obstacles), the algorithm minimizes the length of all primitive paths consecutively, until a final minimum Euclidean length of the SP is reached. Fast geometric operations rather than dynamical methods are used to minimize the contour lengths of the primitive paths. Neighbor lists are used to keep track of potentially intersecting segments of other chains. Periodic boundary conditions are employed. A finite small line thickness is used in order to make sure that entanglements are not 'lost' due to finite precision of representation of numbers.
    Restrictions on the complexity of the problem: For a single chain embedded within fixed line or point obstacles, the algorithm returns the exact SP. For more complex problems, the algorithm returns a locally optimal SP. Except for exotic, probably rare, configurations it turns out that different locally optimal SPs possess a nearly identical number of nodes. In general, the problem of constructing the SP is known to be NP-hard [J.S.B. Mitchell, Geometric shortest paths and network optimization, in: J.-R. Sack, J. Urrutia (Eds.), Handbook of Computational Geometry, Elsevier, Amsterdam, 2000, pp. 633-701], and we offer a solution which should suffice to analyze physical problems and gives an estimate of the precision and uniqueness of the result (from a standard deviation by varying the parameter: cyclicswitch). The program is NOT restricted to handle systems for which segment lengths of the SP exceed half the box size.
    Typical running time: Typical running times are approximately two orders of magnitude shorter compared with the ones needed for a corresponding molecular dynamics approach, and scale mostly linearly with system size. We provide a benchmark table.

  5. Self-consistent collective coordinate for reaction path and inertial mass

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Nakatsukasa, Takashi

    2016-11-01

    We propose a numerical method to determine the optimal collective reaction path for a nucleus-nucleus collision, based on the adiabatic self-consistent collective coordinate (ASCC) method. We use an iterative method, combining the imaginary-time evolution and the finite amplitude method, for the solution of the ASCC coupled equations. It is applied to the simplest case, α-α scattering. We determine the collective path, the potential, and the inertial mass. The results are compared with other methods, such as the constrained Hartree-Fock method, Inglis's cranking formula, and the adiabatic time-dependent Hartree-Fock (ATDHF) method.

  6. Validation of GOSAT XCO2 and XCH4 retrieved by PPDF-S method and evaluation of sensitivity of aerosols to gas concentrations

    NASA Astrophysics Data System (ADS)

    Iwasaki, C.; Imasu, R.; Bril, A.; Yokota, T.; Yoshida, Y.; Morino, I.; Oshchepkov, S.; Rokotyan, N.; Zakharov, V.; Gribanov, K.

    2017-12-01

    The photon path length probability density function-simultaneous (PPDF-S) method is one of the effective algorithms for retrieving column-averaged concentrations of carbon dioxide (XCO2) and methane (XCH4) from Greenhouse gases Observing SATellite (GOSAT) spectra in the Short Wavelength InfraRed (SWIR) [Oshchepkov et al., 2013]. In this study, we validated XCO2 and XCH4 retrieved by the PPDF-S method through comparison with Total Carbon Column Observing Network (TCCON) data [Wunch et al., 2011] from 26 sites, including the additional site of the Ural Atmospheric Station at Kourovka [57.038°N and 59.545°E], Russia. Validation against TCCON data shows that the bias and its standard deviation for the PPDF-S data are, respectively, 0.48 and 2.10 ppm for XCO2, and -0.73 and 15.77 ppb for XCH4. The results for XCO2 are almost identical with those of Iwasaki et al. [2017], for which the validation data were limited to 11 selected sites. However, the bias of XCH4 has the opposite sign to that of Iwasaki et al. [2017]. Furthermore, the data at Kourovka showed different features, particularly for XCH4. In order to investigate the causes of the differences, we carried out simulation studies mainly focusing on the effects of aerosols, which modify the light path length of solar radiation [O'Brien and Rayner, 2002; Aben et al., 2007; Oshchepkov et al., 2008]. Based on simulation studies using a radiative transfer code based on the Discrete Ordinate Method (DOM), the Polarization System for Transfer of Atmospheric Radiation 3 (Pstar3) [Ota et al., 2010], the sensitivity of the retrieved gas concentrations to aerosols was examined.

  7. Characterization of the Space Shuttle Ascent Debris using CFD Methods

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Rogers, Stuart E.

    2005-01-01

    After video analysis of space shuttle flight STS-107's ascent showed that an object shed from the bipod-ramp region impacted the left wing, a transport analysis was initiated to determine a credible flight path and impact velocity for the piece of debris. This debris transport analysis was performed both during orbit, and after the subsequent re-entry accident. The analysis provided an accurate prediction of the velocity a large piece of foam bipod ramp would have as it impacted the wing leading edge. This prediction was corroborated by video analysis and fully-coupled CFD/six degree of freedom (DOF) simulations. While the prediction of impact velocity was accurate enough to predict critical damage in this case, one of the recommendations of the Columbia Accident Investigation Board (CAIB) for return-to-flight (RTF) was to analyze the complete debris environment experienced by the shuttle stack on ascent. This includes categorizing all possible debris sources, their probable geometric and aerodynamic characteristics, and their potential for damage. This paper is chiefly concerned with predicting the aerodynamic characteristics of a variety of potential debris sources (insulating foam and cork, nose-cone ablator, ice, ...) for the shuttle ascent configuration using CFD methods. These aerodynamic characteristics are used in the debris transport analysis to predict flight path, impact velocity and angle, and provide statistical variation to perform risk analyses where appropriate. The debris aerodynamic characteristics are difficult to determine using traditional methods, such as static or dynamic test data, due to the scaling requirements of simulating a typical debris event. The use of CFD methods has been a critical element for building confidence in the accuracy of the debris transport code by bridging the gap between existing aerodynamic data and the dynamics of full-scale, in-flight events.

  8. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
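
    The core Hausdorff computation reduces to a few lines; this sketch compares two invented 2-D curves rather than 3N-dimensional protein trajectories.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        # Two toy "transition paths": sequences of configurations, here in 2-D.
        t = np.linspace(0.0, 1.0, 200)
        path_a = np.c_[t, t ** 2]                                   # smooth route
        path_b = np.c_[t, t ** 2 + 0.05 * np.sin(8 * np.pi * t)]    # rough variant

        # Symmetric Hausdorff distance: the worst-case closest-point deviation.
        d = max(directed_hausdorff(path_a, path_b)[0],
                directed_hausdorff(path_b, path_a)[0])
        print(f"Hausdorff distance: {d:.4f}")    # ~0.05, the ripple amplitude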

  9. Environmental factors and flow paths related to Escherichia coli concentrations at two beaches on Lake St. Clair, Michigan, 2002–2005

    USGS Publications Warehouse

    Holtschlag, David J.; Shively, Dawn; Whitman, Richard L.; Haack, Sheridan K.; Fogarty, Lisa R.

    2008-01-01

    Regression analyses and hydrodynamic modeling were used to identify environmental factors and flow paths associated with Escherichia coli (E. coli) concentrations at Memorial and Metropolitan Beaches on Lake St. Clair in Macomb County, Mich. Lake St. Clair is part of the binational waterway between the United States and Canada that connects Lake Huron with Lake Erie in the Great Lakes Basin. Linear regression, regression-tree, and logistic regression models were developed from E. coli concentration and ancillary environmental data. Linear regression models on log10 E. coli concentrations indicated that rainfall prior to sampling, water temperature, and turbidity were positively associated with bacteria concentrations at both beaches. Flow from Clinton River, changes in water levels, wind conditions, and log10 E. coli concentrations 2 days before or after the target bacteria concentrations were statistically significant at one or both beaches. In addition, various interaction terms were significant at Memorial Beach. Linear regression models for both beaches explained only about 30 percent of the variability in log10 E. coli concentrations. Regression-tree models were developed from data from both Memorial and Metropolitan Beaches but were found to have limited predictive capability in this study. The results indicate that too few observations were available to develop reliable regression-tree models. Linear logistic models were developed to estimate the probability of E. coli concentrations exceeding 300 most probable number (MPN) per 100 milliliters (mL). Rainfall amounts before bacteria sampling were positively associated with exceedance probabilities at both beaches. Flow of Clinton River, turbidity, and log10 E. coli concentrations measured before or after the target E. coli measurements were related to exceedances at one or both beaches. The linear logistic models were effective in estimating bacteria exceedances at both beaches. A receiver operating characteristic (ROC) analysis was used to determine cut points for maximizing the true positive rate prediction while minimizing the false positive rate. A two-dimensional hydrodynamic model was developed to simulate horizontal current patterns on Lake St. Clair in response to wind, flow, and water-level conditions at model boundaries. Simulated velocity fields were used to track hypothetical massless particles backward in time from the beaches along flow paths toward source areas. Reverse particle tracking for idealized steady-state conditions shows changes in expected flow paths and traveltimes with wind speeds and directions from 24 sectors. The results indicate that three to four sets of contiguous wind sectors have similar effects on flow paths in the vicinity of the beaches. In addition, reverse particle tracking was used for transient conditions to identify expected flow paths for 10 E. coli sampling events in 2004. These results demonstrate the ability to track hypothetical particles from the beaches, backward in time, to likely source areas. This ability, coupled with a greater frequency of bacteria sampling, may provide insight into changes in bacteria concentrations between source and sink areas.
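
    A hedged sketch of the logistic-regression-plus-ROC workflow follows, on synthetic data; the covariates, coefficients, and threshold rule are invented for illustration, not the study's fitted model.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        n = 400
        rain = rng.exponential(10.0, n)              # rainfall (mm) before sampling
        turb = rng.lognormal(1.0, 0.5, n)            # turbidity (NTU)
        logit = -3.0 + 0.15 * rain + 0.4 * turb + rng.normal(0, 1, n)
        exceed = (logit > 0).astype(int)             # 1 if E. coli > 300 MPN/100 mL

        X = np.c_[rain, turb]
        model = LogisticRegression().fit(X, exceed)
        prob = model.predict_proba(X)[:, 1]

        # ROC analysis: choose the cut point maximizing TPR - FPR (Youden's J),
        # i.e. a high true positive rate at a low false positive rate.
        fpr, tpr, thresholds = roc_curve(exceed, prob)
        best = np.argmax(tpr - fpr)
        print(f"cut point {thresholds[best]:.2f}: "
              f"TPR={tpr[best]:.2f}, FPR={fpr[best]:.2f}")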

  10. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  11. A horse’s locomotor signature: COP path determined by the individual limb

    PubMed Central

    Hobbs, Sarah Jane; Back, Willem

    2017-01-01

    Introduction Ground reaction forces in sound horses with asymmetric hooves show systematic differences in the horizontal braking force and relative timing of break-over. The Center Of Pressure (COP) path quantifies the dynamic load distribution under the hoof in a moving horse. The objective was to test whether anatomical asymmetry, quantified by the difference in dorsal wall angle between the left and right forelimbs, correlates with asymmetry in the COP path between these limbs. In addition, the repeatability of the COP path was investigated. Methods A large group (n = 31) of visually sound horses with various degrees of dorsal hoof wall asymmetry trotted three times over a pressure mat. The COP path was determined in a hoof-bound coordinate system, and the relationship between left-right COP path correlations and the degree of asymmetry was investigated. Results Using a hoof-bound coordinate system made the COP path highly repeatable and unique for each limb. The craniocaudal patterns are usually highly correlated between left and right, but the mediolateral patterns are not. Some patterns were found between COP path and dorsal wall angle, but asymmetry in dorsal wall angle did not necessarily result in asymmetry in COP path, and the same could be stated for symmetry. Conclusion This is a highly sensitive method for quantifying the net result of the interaction between all of the forces and torques that occur in the limb and its inertial properties. We argue that changes in motor control, muscle force, inertial properties, kinematics and kinetics can potentially be picked up at an early stage using this method, which could therefore serve as an early detection method for changes in the musculoskeletal apparatus. PMID:28196073

  12. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-square problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
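
    A minimal numerical illustration of the exact-penalty idea (not the authors' sweep-operator path algorithm): as the penalty constant grows, the minimizer of the absolute-value-penalized least-squares objective reaches the constrained solution at a finite penalty, unlike a quadratic penalty. All data below are randomly generated.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      A, b = rng.normal(size=(30, 3)), rng.normal(size=30)
      c, d = np.ones(3), 1.0                      # affine constraint c @ x = d

      def objective(x, rho):
          return np.sum((A @ x - b) ** 2) + rho * abs(c @ x - d)

      x = np.zeros(3)
      for rho in (0.0, 1.0, 5.0, 25.0, 125.0):    # follow the path as rho increases
          x = minimize(objective, x, args=(rho,), method="Nelder-Mead").x
          print(f"rho={rho:7.1f}  constraint residual={c @ x - d:+.5f}")
      # The residual reaches ~0 at a finite rho and stays there (exact penalty).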

  13. Determining Dynamical Path Distributions using Maximum Relative Entropy

    DTIC Science & Technology

    2015-05-31

    entropy to a one-dimensional continuum labeled by a parameter η. The resulting η-entropies are equivalent to those proposed by Rényi [12] or by Tsallis [13]. [12] A. Rényi, "On measures of entropy and information," Proc. 4th Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1, pp. 547-561

  14. Decentralized Routing and Diameter Bounds in Entangled Quantum Networks

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    2017-04-01

    Entangled quantum networks are a necessity for any future quantum internet, long-distance quantum key distribution, and quantum repeater networks. The entangled quantum nodes can communicate through several different levels of entanglement, leading to a heterogeneous, multi-level entangled network structure. The level of entanglement between the quantum nodes determines the hop distance, the number of spanned nodes, and the probability of the existence of an entangled link in the network. In this work we define a decentralized routing for entangled quantum networks. We show that the probability distribution of the entangled links can be modeled by a specific distribution in a base-graph. The results allow us to perform efficient routing to find the shortest paths in entangled quantum networks by using only local knowledge of the quantum nodes. We give bounds on the maximum value of the total number of entangled links of a path. The proposed scheme can be directly applied in practical quantum communications and quantum networking scenarios. This work was partially supported by the Hungarian Scientific Research Fund - OTKA K-112125.
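
    The routing result is in the spirit of classical decentralized search on a base graph. The sketch below is a hedged stand-in rather than the authors' construction: greedy routing on a ring whose single long-range contact per step is drawn with probability inversely proportional to distance (a Kleinberg-type model), where each node uses only local knowledge of its neighbors' positions.

      import random

      N = 1024

      def ring(a, b):
          return min((a - b) % N, (b - a) % N)

      def long_range_contact(u):
          # One long-range contact with P(v) proportional to 1/ring(u, v).
          nodes = [v for v in range(N) if v != u]
          return random.choices(nodes, [1.0 / ring(u, v) for v in nodes])[0]

      def greedy_route(s, t):
          hops, u = 0, s
          while u != t:
              candidates = [(u - 1) % N, (u + 1) % N, long_range_contact(u)]
              u = min(candidates, key=lambda v: ring(v, t))   # purely local decision
              hops += 1
          return hops

      trials = [greedy_route(random.randrange(N), random.randrange(N))
                for _ in range(100)]
      print("mean hops:", sum(trials) / len(trials))          # ~polylog(N) growth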

  15. Evaluating a linearized Euler equations model for strong turbulence effects on sound propagation.

    PubMed

    Ehrhardt, Loïc; Cheinet, Sylvain; Juvé, Daniel; Blanc-Benon, Philippe

    2013-04-01

    Sound propagation outdoors is strongly affected by atmospheric turbulence. Under strongly perturbed conditions or over long propagation paths, the sound fluctuations reach their asymptotic behavior, e.g., the intensity variance progressively saturates. The present study evaluates the ability of a numerical propagation model, based on finite-difference time-domain solution of the linearized Euler equations, to reproduce quantitatively the wave statistics under strong and saturated intensity fluctuations. It is the continuation of a previous study in which weak intensity fluctuations were considered. The numerical propagation model is presented and tested with two-dimensional harmonic sound propagation over long paths and strong atmospheric perturbations. The results are compared to quantitative theoretical or numerical predictions available on the wave statistics, including the log-amplitude variance and the probability density functions of the complex acoustic pressure. The match is excellent for the evaluated source frequencies and all sound-fluctuation strengths. Hence, this model captures many aspects of strong atmospheric turbulence effects on sound propagation. Finally, the model results for the intensity probability density function are compared with a standard fit by a generalized gamma function.

  16. Extracting the differential inverse inelastic mean free path and differential surface excitation probability of Tungsten from X-ray photoelectron spectra and electron energy loss spectra

    NASA Astrophysics Data System (ADS)

    Afanas'ev, V. P.; Gryazev, A. S.; Efremenko, D. S.; Kaplya, P. S.; Kuznetcova, A. V.

    2017-12-01

    Precise knowledge of the differential inverse inelastic mean free path (DIIMFP) and differential surface excitation probability (DSEP) of Tungsten is essential for many fields of materials science. In this paper, a fitting algorithm is applied for extracting the DIIMFP and DSEP from X-ray photoelectron spectra and electron energy loss spectra. The algorithm uses the partial intensity approach as a forward model, in which a spectrum is given as a weighted sum of cross-convolved DIIMFPs and DSEPs. The weights are obtained as solutions of the Riccati and Lyapunov equations derived from the invariant imbedding principle. The inversion algorithm utilizes a parametrization of DIIMFPs and DSEPs based on a classical Lorentz oscillator. Unknown parameters of the model are found by a fitting procedure, which minimizes the residual between measured spectra and forward simulations. It is found that the surface layer of Tungsten contains several sublayers with corresponding Langmuir resonances. The thicknesses of these sublayers are proportional to the periods of the corresponding Langmuir oscillations, as predicted by the theory of R.H. Ritchie.
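
    As a schematic of the parametrization step, the Python sketch below fits a single classical Lorentz oscillator to a synthetic energy-loss spectrum by least squares. The oscillator parameters and noise level are invented, and the partial-intensity forward model with cross-convolved DIIMFPs and DSEPs is not reproduced.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentz_elf(E, A, E0, gamma):
          # Energy-loss function of a single classical Lorentz oscillator.
          return A * gamma * E / ((E0 ** 2 - E ** 2) ** 2 + (gamma * E) ** 2)

      E = np.linspace(1.0, 60.0, 300)                  # energy loss grid (eV)
      truth = lorentz_elf(E, 250.0, 25.0, 8.0)         # invented oscillator parameters
      noisy = truth + np.random.default_rng(2).normal(0.0, 0.02, E.size)

      popt, _ = curve_fit(lorentz_elf, E, noisy, p0=[100.0, 20.0, 5.0])
      print("fitted A, E0, gamma:", np.round(popt, 2))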

  17. General formulation of long-range degree correlations in complex networks

    NASA Astrophysics Data System (ADS)

    Fujiki, Yuka; Takaguchi, Taro; Yakubo, Kousuke

    2018-06-01

    We provide a general framework for analyzing degree correlations between nodes separated by more than one step (i.e., beyond nearest neighbors) in complex networks. One joint and four conditional probability distributions are introduced to fully describe long-range degree correlations with respect to degrees k and k' of two nodes and shortest path length l between them. We present general relations among these probability distributions and clarify the relevance to nearest-neighbor degree correlations. Unlike nearest-neighbor correlations, some of these probability distributions are meaningful only in finite-size networks. Furthermore, as a baseline to determine the existence of intrinsic long-range degree correlations in a network other than inevitable correlations caused by the finite-size effect, the functional forms of these probability distributions for random networks are analytically evaluated within a mean-field approximation. The utility of our argument is demonstrated by applying it to real-world networks.
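
    A direct way to make the long-range measures concrete is to estimate the joint degree distribution of node pairs at fixed shortest-path length l from a graph. The sketch below does this with networkx on a scale-free test graph; the graph model and the choice l = 3 are arbitrary, for illustration only.

      from collections import Counter
      from itertools import combinations
      import networkx as nx

      G = nx.barabasi_albert_graph(300, 3, seed=0)     # arbitrary test graph
      deg = dict(G.degree())
      spl = dict(nx.all_pairs_shortest_path_length(G))

      l_target = 3                                     # fixed shortest-path length
      pairs = Counter()
      for u, v in combinations(G.nodes, 2):
          if spl[u].get(v) == l_target:
              pairs[tuple(sorted((deg[u], deg[v])))] += 1

      total = sum(pairs.values())
      for (k, kp), c in pairs.most_common(5):
          print(f"P(k={k}, k'={kp} | l={l_target}) = {c / total:.4f}")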

  18. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm^-1] to the curved CSP and MLP path estimates (5 lp cm^-1). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.
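
    The ART update itself is a Kaczmarz sweep over rays: each ray contributes one row a_i of a linear system A x = b, and the choice of path estimate (SLP, CSP, or MLP) only changes which pixels the row weights. A hedged toy sketch with a random system in place of simulated proton rays:

      import numpy as np

      def art(A, b, sweeps=50, relax=0.5):
          x = np.zeros(A.shape[1])
          for _ in range(sweeps):
              for a_i, b_i in zip(A, b):
                  denom = a_i @ a_i
                  if denom > 0.0:               # skip rays that miss every pixel
                      x += relax * (b_i - a_i @ x) / denom * a_i
          return x

      rng = np.random.default_rng(3)
      x_true = rng.random(16)                                   # 4x4 toy phantom
      A = (rng.random((40, 16)) < 0.3) * rng.random((40, 16))   # sparse "path lengths"
      b = A @ x_true                                            # noiseless ray sums
      print("max reconstruction error:", np.abs(art(A, b) - x_true).max())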

  19. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-01-01

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components. PMID:28492481

  20. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift-term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (with f real) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how the sign problem can be avoided in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.

  1. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-05-11

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components.

  2. Using hidden Markov models to align multiple sequences.

    PubMed

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
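
    A toy numerical example of the scoring rule described above: a three-column, match-state-only profile "HMM" in which the probability of a sequence is the product of per-state emission frequencies and a constant transition probability. Insert and delete states, and the full forward algorithm over hidden paths, are omitted; all numbers are invented.

      import math

      emissions = [                    # per-column symbol frequencies ("states")
          {"A": 0.8, "C": 0.1, "G": 0.05, "T": 0.05},
          {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
          {"A": 0.05, "C": 0.05, "G": 0.8, "T": 0.1},
      ]
      T_MATCH = 0.9                    # invented match-to-match transition probability

      def path_probability(seq):
          p = 1.0
          for state, symbol in zip(emissions, seq):
              p *= T_MATCH * state[symbol]   # transition prob x emission prob
          return p

      for seq in ("ACG", "TTT"):
          odds = path_probability(seq) / 0.25 ** 3   # vs uniform background
          print(seq, f"P={path_probability(seq):.4g}  log2 odds={math.log2(odds):+.2f}")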

  3. Cytological Evaluation and REBA HPV-ID HPV Testing of Newly Developed Liquid-Based Cytology, EASYPREP: Comparison with SurePath.

    PubMed

    Lee, Youn Soo; Gong, Gyungyub; Sohn, Jin Hee; Ryu, Ki Sung; Lee, Jung Hun; Khang, Shin Kwang; Cho, Kyung-Ja; Kim, Yong-Man; Kang, Chang Suk

    2013-06-01

    The objective of this study was to evaluate the newly developed EASYPREP liquid-based cytology method in cervicovaginal specimens and compare it with SurePath. Cervicovaginal specimens were prospectively collected from 1,000 patients with EASYPREP and SurePath. The specimens were first collected by brushing for SurePath and second for EASYPREP. The specimens of both methods were diagnosed according to the Bethesda System. Additionally, we performed REBA HPV-ID genotyping and sequencing analysis for human papillomavirus (HPV) on 249 specimens. EASYPREP and SurePath showed even distribution of cells and were equal in cellularity and staining quality. The diagnostic agreement between the two methods was 96.5%. Based on the standard of SurePath, the sensitivity, specificity, positive predictive value, and negative predictive value of EASYPREP were 90.7%, 99.2%, 94.8%, and 98.5%, respectively. The positivity of REBA HPV-ID was 49.4% and 95.1% in normal and abnormal cytological samples, respectively. The results of REBA HPV-ID showed high concordance with sequencing analysis. EASYPREP provided results comparable to SurePath in the diagnosis and staining quality of cytology examinations and in HPV testing with REBA HPV-ID. EASYPREP could be another LBC method choice for cervicovaginal specimens. Additionally, REBA HPV-ID may be a useful method for HPV genotyping.

  4. Flight 20 (STS-45) polysulfide gas path investigation

    NASA Technical Reports Server (NTRS)

    Bjorkman, Rey C.; Bown, Charles W.; Smith, Scott D.; Walters, Jerry L.; Kulkarni, Suresh B.; Cook, Roger V.; Sebahar, David A.; Walker, Craig S.; Haddock, M. Reed; Lindstrom, Robert E.

    1992-01-01

    This report documents the results of the investigation into causes of gas paths on the 20A and 20B case-to-nozzle joints on STS-42. The investigation was conducted by the Investigation Board appointed by the senior vice president and general manager of Space Operations, Mr. R. E. Lindstrom, on 7 Feb. 1992. The probability of gas path occurrence in the nozzle-to-case-joint polysulfide had been identified during joint redesign. However, actual flight gas path incidence has been limited to RSRM-11 and the 20A and 20B segments. The blow-by condition on the 20A segment was a first time occurrence which was a special concern. The investigation covered all technical aspects associated with the gas path and blow-by conditions: materials and processing history, design requirements and as-built compliance to the design, thermal and structural analyses, computer modeling, and laboratory experimentation with the materials involved. The investigation was coordinated with Mr. Ken Jones at NASA Marshall in bi-weekly teleconferences. The Board also supported Dr. James C. Blair's independent NASA investigation team by providing copies of collected data, conducting requested analyses, and supporting several all-day teleconferences to provide understanding and resolve issues. The Dr. Blair support requirement was successfully concluded on 4 Mar. 1992.

  5. Drift-Induced Selection Between Male and Female Heterogamety.

    PubMed

    Veller, Carl; Muralidhar, Pavitra; Constable, George W A; Nowak, Martin A

    2017-10-01

    Evolutionary transitions between male and female heterogamety are common in both vertebrates and invertebrates. Theoretical studies of these transitions have found that, when all genotypes are equally fit, continuous paths of intermediate equilibria link the two sex chromosome systems. This observation has led to a belief that neutral evolution along these paths can drive transitions, and that arbitrarily small fitness differences among sex chromosome genotypes can determine the system to which evolution leads. Here, we study stochastic evolutionary dynamics along these equilibrium paths. We find non-neutrality, both in transitions retaining the ancestral pair of sex chromosomes, and in those creating a new pair. In fact, substitution rates are biased in favor of dominant sex determining chromosomes, which fix with higher probabilities than mutations of no effect. Using diffusion approximations, we show that this non-neutrality is a result of "drift-induced selection" operating at every point along the equilibrium paths: stochastic jumps off the paths return with, on average, a directional bias in favor of the dominant segregating sex chromosome. Our results offer a novel explanation for the observed preponderance of dominant sex determining genes, and hint that drift-induced selection may be a common force in standard population genetic systems. Copyright © 2017 by the Genetics Society of America.

  6. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
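
    The statistical core of the approach can be sketched apart from symbolic execution: sample inputs (standing in for program paths), maintain a Beta posterior over the probability of reaching the target event, and stop once the credible interval is tight. The toy predicate and thresholds below are invented.

      import random
      from scipy.stats import beta

      def hits_target(x, y):
          # Toy stand-in for "this execution reaches the assert violation".
          return x * x + y > 0.9

      hits = trials = 0
      while True:
          trials += 1
          hits += hits_target(random.random(), random.random())
          lo, hi = beta.interval(0.95, 1 + hits, 1 + trials - hits)  # Beta(1,1) prior
          if hi - lo < 0.02:              # stop once the 95% interval is narrow
              break

      print(f"P(target) ≈ {hits / trials:.3f} in [{lo:.3f}, {hi:.3f}] after {trials} runs")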

  7. Daytime identification of summer hailstorm cells from MSG data

    NASA Astrophysics Data System (ADS)

    Merino, A.; López, L.; Sánchez, J. L.; García-Ortega, E.; Cattani, E.; Levizzani, V.

    2014-04-01

    Identifying deep convection is of paramount importance, as it may be associated with extreme weather phenomena that have significant impact on the environment, property and populations. A new method, the hail detection tool (HDT), is described for identifying hail-bearing storms using multispectral Meteosat Second Generation (MSG) data. HDT was conceived as a two-phase method, in which the first step is the convective mask (CM) algorithm devised for detection of deep convection, and the second a hail mask algorithm (HM) for the identification of hail-bearing clouds among cumulonimbus systems detected by CM. Both CM and HM are based on logistic regression models trained with multispectral MSG data sets comprised of summer convective events in the middle Ebro Valley (Spain) between 2006 and 2010, and detected by the RGB (red-green-blue) visualization technique (CM) or C-band weather radar system of the University of León. By means of the logistic regression approach, the probability of identifying a cumulonimbus event with CM or a hail event with HM are computed by exploiting a proper selection of MSG wavelengths or their combination. A number of cloud physical properties (liquid water path, optical thickness and effective cloud drop radius) were used to physically interpret results of statistical models from a meteorological perspective, using a method based on these "ingredients". Finally, the HDT was applied to a new validation sample consisting of events during summer 2011. The overall probability of detection was 76.9 % and the false alarm ratio 16.7 %.
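
    For reference, the two verification scores quoted above are simple contingency-table ratios. The counts in the sketch below are invented, but chosen so that they reproduce the reported POD of 76.9% and FAR of 16.7%.

      hits, misses, false_alarms = 50, 15, 10       # invented contingency counts

      pod = hits / (hits + misses)                  # probability of detection
      far = false_alarms / (hits + false_alarms)    # false alarm ratio
      print(f"POD = {pod:.1%}  FAR = {far:.1%}")    # -> POD = 76.9%  FAR = 16.7%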

  8. Day-time identification of summer hailstorm cells from MSG data

    NASA Astrophysics Data System (ADS)

    Merino, A.; López, L.; Sánchez, J. L.; García-Ortega, E.; Cattani, E.; Levizzani, V.

    2013-10-01

    Identifying deep convection is of paramount importance, as it may be associated with extreme weather that has significant impact on the environment, property and the population. A new method, the Hail Detection Tool (HDT), is described for identifying hail-bearing storms using multi-spectral Meteosat Second Generation (MSG) data. HDT was conceived as a two-phase method, in which the first step is the Convective Mask (CM) algorithm devised for detection of deep convection, and the second a Hail Detection algorithm (HD) for the identification of hail-bearing clouds among cumulonimbus systems detected by CM. Both CM and HD are based on logistic regression models trained with multi-spectral MSG data-sets comprised of summer convective events in the middle Ebro Valley between 2006-2010, and detected by the RGB visualization technique (CM) or C-band weather radar system of the University of León. By means of the logistic regression approach, the probability of identifying a cumulonimbus event with CM or a hail event with HD are computed by exploiting a proper selection of MSG wavelengths or their combination. A number of cloud physical properties (liquid water path, optical thickness and effective cloud drop radius) were used to physically interpret results of statistical models from a meteorological perspective, using a method based on these "ingredients." Finally, the HDT was applied to a new validation sample consisting of events during summer 2011. The overall Probability of Detection (POD) was 76.9% and False Alarm Ratio 16.7%.

  9. Study on high-resolution representation of terraces in Shanxi Loess Plateau area

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ma, Lei

    2008-10-01

    A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forth for representing the typical terraces in the Shanxi loess plateau area. The DEM Feature Points and Lines Classification (DEPLC) scheme put forth by the authors in 2007 is refined for depicting the main path in the study area, and the EAM is used to visualize the terraces and the path. 406 key elevation points and 15 feature constrained lines sampled by TSM are used to construct CD-TINs, which depict the terraces and path correctly and effectively. Our case study shows that the new sampling method is reasonable and feasible: complicated micro-terrains such as terraces and paths can be represented successfully, with high resolution and efficiency, by use of the refined DEPLC, TSM, and CD-TINs, and both the terraces and the main path are visualized well by EAM even when the terrace height is no more than 1 m.

  10. Roads at risk - the impact of debris flows on road network reliability and vulnerability in southern Norway

    NASA Astrophysics Data System (ADS)

    Meyer, Nele Kristin; Schwanghart, Wolfgang; Korup, Oliver

    2014-05-01

    Norway's road network is frequently affected by debris flows. Both damage repair and traffic interruption generate high economic losses and necessitate a rigorous assessment of where losses are expected to be high and where preventive measures should be focused. In recent studies, we have developed susceptibility and trigger probability maps that serve as input into a hazard calculation at the scale of first-order watersheds. Here we combine these results with graph theory to assess the impact of debris flows on the road network of southern Norway. Susceptibility and trigger probability are aggregated for individual road sections to form a reliability index that relates to the failure probability of a link that connects two network vertices, e.g., road junctions. We define link vulnerability as a function of traffic volume and additional link failure distance. The additional link failure distance is the extra length of the alternative path connecting the two associated link vertices in case the network link fails and is calculated by a shortest-path algorithm. The product of the network reliability and vulnerability indices represents the risk index. High risk indices identify critical links of the Norwegian road network and are investigated in more detail. Scenarios demonstrating the impact of single or multiple debris flow events are run for the most important routes between seven large cities in southern Norway. First results show that the reliability of the road network is lowest in the central and north-western parts of the study area. Road network vulnerability is highest in the mountainous regions of central southern Norway, where the road density is low, and in the vicinity of cities, where the traffic volume is large. The scenarios indicate that city connections whose shortest paths cross the central part of the study area have the highest risk of route failure.
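
    The additional link failure distance can be computed directly with a shortest-path algorithm: remove the link, re-route between its endpoints, and take the extra length. A hedged sketch on an invented toy network:

      import networkx as nx

      G = nx.Graph()
      G.add_weighted_edges_from([              # invented road links, lengths in km
          ("A", "B", 10), ("B", "C", 15), ("A", "C", 40),
          ("C", "D", 20), ("B", "D", 50),
      ])

      for u, v, data in sorted(G.edges(data=True)):
          H = G.copy()
          H.remove_edge(u, v)
          try:
              detour = nx.shortest_path_length(H, u, v, weight="weight")
              extra = detour - data["weight"]
          except nx.NetworkXNoPath:
              extra = float("inf")             # no alternative route exists
          print(f"link {u}-{v}: additional link failure distance = {extra} km")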

  11. Dynamical mechanism in aero-engine gas path system using minimum spanning tree and detrended cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Zhang, Hong; Gao, You

    2017-01-01

    Identifying the mutual interactions in an aero-engine gas path system is a crucial problem that facilitates the understanding of emerging structures in complex systems. By employing the multiscale multifractal detrended cross-correlation analysis method on the aero-engine gas path system, the cross-correlation characteristics between gas path system parameters are established. Further, we apply the multiscale multifractal detrended cross-correlation distance matrix and minimum spanning tree to investigate the mutual interactions of gas path variables. The results indicate that the low-spool rotor speed (N1) and engine pressure ratio (EPR) are the main gas path parameters. The application of the proposed method contributes to promoting our understanding of the internal mechanisms and structures of aero-engine dynamics.
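
    A minimal sketch of the distance-matrix-plus-MST step, using a plain correlation distance in place of the paper's multiscale multifractal DCCA distance; the parameter names other than N1 and EPR, and all data, are invented.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(4)
      names = ["N1", "N2", "EPR", "EGT", "FF"]   # only N1 and EPR come from the paper
      base = rng.normal(size=500)
      data = np.array([base * w + rng.normal(size=500)
                       for w in (1.0, 0.8, 0.9, 0.3, 0.2)])

      corr = np.corrcoef(data)
      dist = np.sqrt(np.maximum(2.0 * (1.0 - corr), 0.0))   # correlation distance

      G = nx.Graph()
      for i in range(len(names)):
          for j in range(i + 1, len(names)):
              G.add_edge(names[i], names[j], weight=dist[i, j])
      mst = nx.minimum_spanning_tree(G)
      print(sorted(mst.edges(data="weight")))    # hub nodes suggest central parameters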

  12. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.
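
    The core identity is that the concentration difference between two simulations equals the path integral of the first-order sensitivities, and the source contributions are the per-source pieces of that integral. The sketch below verifies this on an invented nonlinear "model" using a 4-point Gauss-Legendre rule along a proportional-control path.

      import numpy as np

      def C(e1, e2):          # toy concentration with a VOC/NOx-like interaction
          return 2.0 * e1 + 0.5 * e2 + 1.5 * e1 * e2 - 0.8 * e2 ** 2

      def grad_C(e1, e2):     # first-order sensitivities (stand-in for DDM output)
          return np.array([2.0 + 1.5 * e2, 0.5 + 1.5 * e1 - 1.6 * e2])

      nodes, weights = np.polynomial.legendre.leggauss(4)  # 4-point rule on [-1, 1]
      t = 0.5 * (nodes + 1.0)                              # path parameter on [0, 1]
      contrib = np.zeros(2)
      for ti, wi in zip(t, 0.5 * weights):
          contrib += wi * grad_C(ti, ti)       # dE/dt = (1, 1): proportional control

      print("source contributions:", contrib.round(6))
      print("sum:", contrib.sum().round(6), " direct difference:", C(1, 1) - C(0, 0))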

  13. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2014-12-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.

  14. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
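
    The fastest-path principle can be illustrated with Dijkstra's algorithm on a pixel grid of heterogeneous wave speeds, where travel time rather than geometric length is minimized and the optimal path bends around a slow (damaged) zone. The grid, speeds, and slow band below are invented, and this is not the FastWay implementation.

      import heapq

      SPEED_FAST, SPEED_SLOW = 4000.0, 400.0   # m/s; slow band mimics a damaged zone
      N = 40
      speed = [[SPEED_SLOW if 15 <= x <= 17 and y < 30 else SPEED_FAST
                for x in range(N)] for y in range(N)]

      def fastest_time(src, dst):
          dist = {src: 0.0}
          pq = [(0.0, src)]
          while pq:
              t, (x, y) = heapq.heappop(pq)
              if (x, y) == dst:
                  return t
              if t > dist.get((x, y), float("inf")):
                  continue
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nx_, ny_ = x + dx, y + dy
                  if 0 <= nx_ < N and 0 <= ny_ < N:
                      step = 0.5 * (1.0 / speed[y][x] + 1.0 / speed[ny_][nx_])
                      nt = t + step            # seconds per unit cell traversed
                      if nt < dist.get((nx_, ny_), float("inf")):
                          dist[(nx_, ny_)] = nt
                          heapq.heappush(pq, (nt, (nx_, ny_)))
          return float("inf")

      print(f"fastest travel time: {fastest_time((0, 20), (39, 20)) * 1e3:.3f} ms")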

  15. An Empirical Method Permitting Rapid Determination of the Area, Rate and Distribution of Water-Drop Impingement on an Airfoil of Arbitrary Section at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Bergrun, N. R.

    1951-01-01

    An empirical method for the determination of the area, rate, and distribution of water-drop impingement on airfoils of arbitrary section is presented. The procedure represents an initial step toward the development of a method which is generally applicable in the design of thermal ice-prevention equipment for airplane wing and tail surfaces. Results given by the proposed empirical method are expected to be sufficiently accurate for the purpose of heated-wing design, and can be obtained from a few numerical computations once the velocity distribution over the airfoil has been determined. The empirical method presented for incompressible flow is based on results of extensive water-drop trajectory computations for five airfoil cases which consisted of 15-percent-thick airfoils encompassing a moderate lift-coefficient range. The differential equations pertaining to the paths of the drops were solved by a differential analyzer. The method developed for incompressible flow is extended to the calculation of area and rate of impingement on straight wings in subsonic compressible flow to indicate the probable effects of compressibility for airfoils at low subsonic Mach numbers.

  16. scEpath: Energy landscape-based inference of transition probabilities and cellular trajectories from single-cell transcriptomic data.

    PubMed

    Jin, Suoqin; MacLean, Adam L; Peng, Tao; Nie, Qing

    2018-02-05

    Single-cell RNA-sequencing (scRNA-seq) offers unprecedented resolution for studying cellular decision-making processes. Robust inference of cell state transition paths and probabilities is an important yet challenging step in the analysis of these data. Here we present scEpath, an algorithm that calculates energy landscapes and probabilistic directed graphs in order to reconstruct developmental trajectories. We quantify the energy landscape using "single-cell energy" and distance-based measures, and find that the combination of these enables robust inference of the transition probabilities and lineage relationships between cell states. We also identify marker genes and gene expression patterns associated with cell state transitions. Our approach produces pseudotemporal orderings that are - in combination - more robust and accurate than current methods, and offers higher resolution dynamics of the cell state transitions, leading to new insight into key transition events during differentiation and development. Moreover, scEpath is robust to variation in the size of the input gene set, and is broadly unsupervised, requiring few parameters to be set by the user. Applications of scEpath led to the identification of a cell-cell communication network implicated in early human embryo development, and novel transcription factors important for myoblast differentiation. scEpath allows us to identify common and specific temporal dynamics and transcriptional factor programs along branched lineages, as well as the transition probabilities that control cell fates. A MATLAB package of scEpath is available at https://github.com/sqjin/scEpath. qnie@uci.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.

  17. Path Planning Method in Multi-obstacle Marine Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Jinpeng; Sun, Hanxv

    2017-12-01

    In this paper, an improved particle swarm optimization algorithm is proposed for underwater robots operating in complex marine environments. The planner not only avoids obstacles during path planning but also accounts for the effects of the current's direction and magnitude on the robot's dynamic performance. The algorithm constructs the path search space with a trunk binary tree structure, and an A* heuristic search is used in this space to find a baseline path. The particle swarm algorithm then optimizes the path by adjusting the evaluation function, which makes the underwater robot easier to control while navigating in the current and reduces its energy consumption.
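
    A minimal sketch of the heuristic-search step (the baseline path that the particle swarm would then refine): A* on a small obstacle grid with a Manhattan heuristic. The grid and costs are invented, and current effects are not modeled here.

      import heapq

      grid = ["..........",
              "..####....",
              "..#..#....",
              "..#..###..",
              ".....#....",
              ".....#...."]            # '#' marks an obstacle cell
      rows, cols = len(grid), len(grid[0])

      def a_star(start, goal):
          h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
          pq, g = [(h(start), 0, start, [start])], {start: 0}
          while pq:
              _, cost, cur, path = heapq.heappop(pq)
              if cur == goal:
                  return path
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nxt = (cur[0] + dr, cur[1] + dc)
                  if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                          and grid[nxt[0]][nxt[1]] != "#"
                          and cost + 1 < g.get(nxt, 1 << 30)):
                      g[nxt] = cost + 1
                      heapq.heappush(pq, (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
          return None

      print(a_star((0, 0), (5, 9)))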

  18. Path Analysis and Residual Plotting as Methods of Environmental Scanning in Higher Education: An Illustration with Applications and Enrollments.

    ERIC Educational Resources Information Center

    Morcol, Goktug; McLaughlin, Gerald W.

    1990-01-01

    The study proposes using path analysis and residual plotting as methods supporting environmental scanning in strategic planning for higher education institutions. Path models of three levels of independent variables are developed. Dependent variables measuring applications and enrollments at Virginia Polytechnic Institute and State University are…

  19. Architecture of marine food webs: To be or not be a 'small-world'.

    PubMed

    Marina, Tomás Ignacio; Saravia, Leonardo A; Cordone, Georgina; Salinas, Vanesa; Doyle, Santiago R; Momo, Fernando R

    2018-01-01

    The search for general properties in network structure has been a central issue for food web studies in recent years. One such property is the small-world topology, which combines high clustering and a small distance between nodes of the network. This property may increase food web resilience but may also make food webs more sensitive to the extinction of connected species. Food web theory has been developed principally from freshwater and terrestrial ecosystems, largely omitting marine habitats. Whether the theory needs to be modified to accommodate observations from marine ecosystems, given major differences in several topological characteristics, is still under debate. Here we investigated whether the small-world topology is a common structural pattern in marine food webs. We developed a novel, simple and statistically rigorous method to examine the largest set of complex marine food webs to date. More than half of the analyzed marine networks exhibited a similar or lower characteristic path length than the random expectation, whereas 39% of the webs presented significantly higher clustering than their random counterparts. Our method showed that 5 out of 28 networks fulfilled both features of the small-world topology: short path length and high clustering. This work represents the first rigorous analysis of the small-world topology and its associated features in high-quality marine networks. We conclude that such topology is a structural pattern that is not maximized in marine food webs; thus it is probably not an effective model to study robustness, stability and feasibility of marine ecosystems.
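
    A simplified version of the small-world check, hedged relative to the paper's statistically rigorous test: compare the observed characteristic path length and clustering against an ensemble of random graphs with the same numbers of nodes and links. The "food web" below is a synthetic stand-in, and the decision thresholds are arbitrary.

      import networkx as nx

      web = nx.connected_watts_strogatz_graph(100, 6, 0.1, seed=1)  # synthetic "web"
      L_obs = nx.average_shortest_path_length(web)
      C_obs = nx.average_clustering(web)

      reps, L_rand, C_rand = 30, 0.0, 0.0
      for s in range(reps):
          R = nx.gnm_random_graph(web.number_of_nodes(), web.number_of_edges(), seed=s)
          giant = R.subgraph(max(nx.connected_components(R), key=len))
          L_rand += nx.average_shortest_path_length(giant) / reps
          C_rand += nx.average_clustering(R) / reps

      print(f"L={L_obs:.2f} (random {L_rand:.2f})  C={C_obs:.3f} (random {C_rand:.3f})")
      print("small-world-like:", L_obs <= 1.2 * L_rand and C_obs > 2.0 * C_rand)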

  20. Causal Mediation Analysis of Survival Outcome with Multiple Mediators.

    PubMed

    Huang, Yen-Tsung; Yang, Hwai-I

    2017-05-01

    Mediation analyses have been a popular approach to investigate the effect of an exposure on an outcome through a mediator. Mediation models with multiple mediators have been proposed for continuous and dichotomous outcomes. However, the development of multimediator models for survival outcomes is still limited. We present methods for multimediator analyses using three survival models: Aalen additive hazard models, Cox proportional hazard models, and semiparametric probit models. Effects through mediators can be characterized by path-specific effects, for which definitions and identifiability assumptions are provided. We derive closed-form expressions for path-specific effects for the three models, which are intuitively interpreted using a causal diagram. Mediation analyses using Cox models under the rare-outcome assumption and Aalen additive hazard models consider effects on the log hazard ratio and the hazard difference, respectively; analyses using semiparametric probit models consider effects on the difference in transformed survival time and on survival probability. The three models were applied to a hepatitis study in which we investigated the effects of hepatitis C on liver cancer incidence mediated through baseline and/or follow-up hepatitis B viral load. The three methods show consistent results on their respective effect scales, which suggest an adverse estimated effect of hepatitis C on liver cancer not mediated through hepatitis B, and a protective estimated effect mediated through the baseline (and possibly follow-up) hepatitis B viral load. Causal mediation analyses of survival outcomes with multiple mediators are developed for additive hazard, proportional hazard, and probit models, with utility demonstrated in a hepatitis study.

  1. Link prediction based on local weighted paths for complex networks

    NASA Astrophysics Data System (ADS)

    Yao, Yabing; Zhang, Ruisheng; Yang, Fan; Yuan, Yongna; Hu, Rongjing; Zhao, Zhili

    As a significant problem in complex networks, link prediction aims to find missing and future links between two unconnected nodes by estimating the existence likelihood of potential links. It plays an important role in understanding the evolution mechanism of networks and has broad applications in practice. In order to improve prediction performance, a variety of structural similarity-based methods that rely on different topological features have been put forward. As one topological feature, the path information between node pairs is utilized to calculate node similarity. However, many path-dependent methods neglect the different contributions of paths for a pair of nodes. In this paper, a local weighted path (LWP) index is proposed to differentiate the contributions between paths. The LWP index considers the effect of the link degrees of intermediate links and the connectivity influence of intermediate nodes on paths to quantify the path weight in the prediction procedure. The experimental results on 12 real-world networks show that the LWP index outperforms the seven other prediction baselines.
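
    As the exact LWP weighting is defined in the paper and not reproduced here, the sketch below computes the closely related unweighted local path index S = A^2 + eps*A^3, which LWP refines by weighting each path by properties of its intermediate links and nodes, and ranks unconnected node pairs by score.

      import numpy as np
      import networkx as nx

      G = nx.karate_club_graph()                 # stand-in test network
      A = nx.to_numpy_array(G)
      S = A @ A + 0.01 * (A @ A @ A)             # length-2 paths plus damped length-3

      scores = [(S[i, j], i, j)
                for i in range(len(A)) for j in range(i + 1, len(A)) if A[i, j] == 0]
      for s, i, j in sorted(scores, reverse=True)[:5]:
          print(f"predicted link {i}-{j}  score={s:.2f}")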

  2. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    NASA Astrophysics Data System (ADS)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) by Bayesian statistical inference, comparing theory to experiment. The formal rule of this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∝ pdf(prior) × likelihood. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters x, knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x. To solve this problem, two major paths can be taken: add approximations and hypotheses to obtain an equation that is solved numerically (minimum of a cost function, the Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (present in the traditional adjustment procedure based on chi-square minimization) and offer alternatives in the choice of probability density distributions for priors and likelihoods. This paper proposes the use of what we call Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) over the whole energy range, from the thermal and resonance regions to the continuum, for all nuclear reaction models at these energies. Algorithms based on Monte-Carlo sampling and Markov chains will be presented. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test the effects of probability density distributions, and to provide a framework for finding the global minimum if several local minima exist. Applications to resolved resonance, unresolved resonance and continuum evaluation, as well as multigroup cross-section data assimilation, will be presented.
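
    A toy illustration of the BMC step, pdf(posterior) ∝ pdf(prior) × likelihood, via Metropolis sampling for a single "model parameter"; the prior, likelihood, and data are invented and no nuclear-physics model is involved.

      import numpy as np

      rng = np.random.default_rng(5)
      data = rng.normal(2.0, 0.5, size=20)       # pseudo-measurements

      def log_post(x):
          log_prior = -0.5 * ((x - 1.0) / 2.0) ** 2           # broad Gaussian prior
          log_like = -0.5 * np.sum(((data - x) / 0.5) ** 2)   # Gaussian likelihood
          return log_prior + log_like

      chain, x = [], 1.0
      for _ in range(20000):
          prop = x + rng.normal(0.0, 0.2)        # random-walk Metropolis proposal
          if np.log(rng.random()) < log_post(prop) - log_post(x):
              x = prop
          chain.append(x)

      post = np.array(chain[2000:])              # discard burn-in
      print(f"posterior mean = {post.mean():.3f}, std = {post.std():.3f}")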

  3. Integrating Geologic, Geochemical and Geophysical Data in a Statistical Analysis of Geothermal Resource Probability across the State of Hawaii

    NASA Astrophysics Data System (ADS)

    Lautze, N. C.; Ito, G.; Thomas, D. M.; Hinz, N.; Frazer, L. N.; Waller, D.

    2015-12-01

    Hawaii offers the opportunity to gain knowledge and develop geothermal energy on the only oceanic hotspot in the U.S. As a remote island state, Hawaii is more dependent on imported fossil fuel than any other state in the U.S., and energy prices are 3 to 4 times higher than the national average. The only proven resource, located on Hawaii Island's active Kilauea volcano, is a region of high geologic risk; other regions of probable resource exist but lack adequate assessment. The last comprehensive statewide geothermal assessment occurred in 1983 and found a potential resource on all islands (Hawaii Institute of Geophysics, 1983). Phase 1 of a Department of Energy funded project to assess the probability of geothermal resource potential statewide in Hawaii was recently completed. The execution of this project was divided into three main tasks: (1) compile all historical and current data for Hawaii that is relevant to geothermal resources into a single Geographic Information System (GIS) project; (2) analyze and rank these datasets in terms of their relevance to the three primary properties of a viable geothermal resource: heat (H), fluid (F), and permeability (P); and (3) develop and apply a Bayesian statistical method to incorporate the ranks and produce probability models that map out Hawaii's geothermal resource potential. Here, we summarize the project methodology and present maps that highlight both high prospect areas as well as areas that lack enough data to make an adequate assessment. We suggest a path for future exploration activities in Hawaii, and discuss how this method of analysis can be adapted to other regions and other types of resources. The figure below shows multiple layers of GIS data for Hawaii Island. Color shades indicate crustal density anomalies produced from inversions of gravity (Flinders et al. 2013). Superimposed on this are mapped calderas, rift zones, volcanic cones, and faults (following Sherrod et al., 2007). These features were used to identify probable locations of intrusive rock (heat) and permeability.

  4. Phase selection during crystallization of undercooled liquid eutectic lead-tin alloys

    NASA Technical Reports Server (NTRS)

    Fecht, H. J.

    1991-01-01

    During rapid solidification, substantial amounts of undercooling are in general required for the formation of metastable phases. Crystallization at varying levels of undercooling and melting of metastable phases were studied during slow cooling and heating of emulsified Pb-Sn alloys. Besides the experimental demonstration of the reversibility of metastable phase equilibria, two different principal solidification paths have been identified and compared with the established metastable phase diagram and predictions from classical nucleation theory. The results suggest that the most probable solidification path is described by the 'step rule', resulting in the formation of metastable phases at low undercooling, whereas the stable eutectic phase mixture crystallizes without metastable phase formation at high undercooling.

  5. Channel Capacity Calculation at Large SNR and Small Dispersion within Path-Integral Approach

    NASA Astrophysics Data System (ADS)

    Reznichenko, A. V.; Terekhov, I. S.

    2018-04-01

    We consider the optical fiber channel modelled by the nonlinear Schrödinger equation with additive white Gaussian noise. Using the Feynman path-integral approach for the model with small dispersion, we find the first nonzero corrections to the conditional probability density function and the channel capacity estimations at large signal-to-noise ratio. We demonstrate that the correction to the channel capacity in the small dimensionless dispersion parameter is quadratic and positive, therefore increasing the earlier calculated capacity for a nondispersive nonlinear optical fiber channel in the intermediate power region. Also, for the small dispersion case we find analytical expressions for simple correlators of the output signals in our noisy channel.

  6. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35??C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and at the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  7. Hot gas path component having near wall cooling features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda, Carlos Miguel; Kottilingam, Srikanth Chandrudu; Lacy, Benjamin Paul

    A method for providing micro-channels in a hot gas path component includes forming a first micro-channel in an exterior surface of a substrate of the hot gas path component. A second micro-channel is formed in the exterior surface of the hot gas path component such that it is separated from the first micro-channel by a surface gap having a first width. The method also includes disposing a braze sheet onto the exterior surface of the hot gas path component such that the braze sheet covers at least a portion of the first and second micro-channels, and heating the braze sheet to bond it to at least a portion of the exterior surface of the hot gas path component.

  8. Semi-Automated Trajectory Analysis of Deep Ballistic Penetrating Brain Injury

    PubMed Central

    Folio, Les; Solomon, Jeffrey; Biassou, Nadia; Fischer, Tatjana; Dworzak, Jenny; Raymont, Vanessa; Sinaii, Ninet; Wassermann, Eric M.; Grafman, Jordan

    2016-01-01

    Background Penetrating head injuries (PHIs) are common in combat operations and most have visible wound paths on computed tomography (CT). Objective We assess agreement between an automated trajectory analysis-based assessment of brain injury and manual tracings of encephalomalacia on CT. Methods We analyzed 80 head CTs with ballistic PHI from the Institutional Review Board approved Vietnam head injury registry. Anatomic reports were generated from spatial coordinates of projectile entrance and terminal fragment location. These were compared to manual tracings of the regions of encephalomalacia. Dice’s similarity coefficients, kappa, sensitivities, and specificities were calculated to assess agreement. Times required for case analysis were also compared. Results Results show high specificity of anatomic regions identified on CT with semiautomated anatomical estimates and manual tracings of tissue damage. Radiologist’s and medical students’ anatomic region reports were similar (Kappa 0.8, t-test p < 0.001). Region of probable injury modeling of involved brain structures was sensitive (0.7) and specific (0.9) compared with manually traced structures. Semiautomated analysis was 9-fold faster than manual tracings. Conclusion Our region of probable injury spatial model approximates anatomical regions of encephalomalacia from ballistic PHI with time-saving over manual methods. Results show potential for automated anatomical reporting as an adjunct to current practice of radiologist/neurosurgical review of brain injury by penetrating projectiles. PMID:23707123
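
    Dice's similarity coefficient, used above to compare modeled and traced regions, is the overlap statistic 2|X∩Y|/(|X|+|Y|). A toy computation on invented binary masks:

      import numpy as np

      rng = np.random.default_rng(6)
      manual = rng.random((64, 64)) < 0.2        # invented traced region mask
      model = manual.copy()
      model[:8] = rng.random((8, 64)) < 0.2      # perturb a band to mimic disagreement

      dice = 2.0 * (manual & model).sum() / (manual.sum() + model.sum())
      print(f"Dice similarity = {dice:.3f}")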

  9. Modeling Drinking Behavior Progression in Youth: a Non-identified Probability Discrete Event System Using Cross-sectional Data

    PubMed Central

    Hu, Xingdi; Chen, Xinguang; Cook, Robert L.; Chen, Ding-Geng; Okafor, Chukwuemeka

    2016-01-01

    Background The probabilistic discrete event systems (PDES) method provides a promising approach to study the dynamics of underage drinking using cross-sectional data. However, the utility of this approach is often limited because the constructed PDES model is often non-identifiable. The purpose of the current study is to attempt a new method to solve the model. Methods A PDES-based model of alcohol use behavior was developed with four progression stages (never-drinker [ND], light/moderate-drinker [LMD], heavy-drinker [HD], and ex-drinker [XD]) linked with 13 possible transition paths. We tested the proposed model with data for participants aged 12–21 from the 2012 National Survey on Drug Use and Health (NSDUH). The Moore-Penrose (M-P) generalized inverse matrix method was applied to solve the proposed model. Results Annual transitional probabilities by age group for the 13 drinking progression pathways were successfully estimated with the M-P generalized inverse matrix approach. Results from our analysis indicate an inverse-J-shaped curve characterizing the pattern of experimental use of alcohol from adolescence to young adulthood. We also observed a dramatic increase in the initiation of LMD and HD after age 18 and a sharp decline in quitting light and heavy drinking. Conclusion Our findings are consistent with the developmental perspective regarding the dynamics of underage drinking, demonstrating the utility of the M-P method in obtaining a unique solution for the partially-observed PDES drinking behavior model. The M-P approach we tested in this study will facilitate the use of the PDES approach to examine many health behaviors with the widely available cross-sectional data. PMID:26511344
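
    A minimal numerical sketch of the solution step may help: flattening the unknown transition probabilities into a vector turns the stage-balance and row-sum conditions into an under-determined linear system A x = b, whose minimum-norm least-squares solution is returned by the Moore-Penrose pseudoinverse (numpy.linalg.pinv). The path set and prevalence numbers below are illustrative stand-ins, not the published model's 13-path specification, and nonnegativity of the estimated probabilities is not enforced here.

        import numpy as np

        # Stages: 0 = never-drinker, 1 = light/moderate, 2 = heavy, 3 = ex-drinker.
        # Illustrative allowed transition paths (from_stage, to_stage).
        paths = [(0, 0), (0, 1), (0, 2),
                 (1, 1), (1, 2), (1, 3),
                 (2, 1), (2, 2), (2, 3),
                 (3, 1), (3, 2), (3, 3)]

        p_t  = np.array([0.60, 0.25, 0.10, 0.05])   # prevalences at age t (hypothetical)
        p_t1 = np.array([0.52, 0.29, 0.13, 0.06])   # prevalences at age t+1 (hypothetical)

        # A x = b: four balance equations p_t1[j] = sum_i p_t[i] * x(i->j),
        # plus four row-sum constraints sum_j x(i->j) = 1; 8 equations, 12 unknowns.
        A = np.zeros((8, len(paths)))
        b = np.concatenate([p_t1, np.ones(4)])
        for k, (i, j) in enumerate(paths):
            A[j, k] = p_t[i]        # balance equation of destination stage j
            A[4 + i, k] = 1.0       # row-sum constraint of origin stage i

        x = np.linalg.pinv(A) @ b   # minimum-norm solution of the system
        for path, prob in zip(paths, np.round(x, 3)):
            print(path, prob)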

  10. Distributed Denial of Service Attack Source Detection Using Efficient Traceback Technique (ETT) in Cloud-Assisted Healthcare Environment.

    PubMed

    Latif, Rabia; Abbas, Haider; Latif, Seemab; Masood, Ashraf

    2016-07-01

    Security and privacy are the first and foremost concerns that should be given special attention when dealing with Wireless Body Area Networks (WBANs). As WBAN sensors operate in an unattended environment and carry critical patient health information, the Distributed Denial of Service (DDoS) attack is one of the major attacks in the WBAN environment that not only exhausts the available resources but also influences the reliability of the information being transmitted. This research work is an extension of our previous work, in which a machine learning based attack detection algorithm was proposed to detect DDoS attacks in the WBAN environment. However, in order to avoid complexity, no consideration was given to the traceback mechanism. During traceback, the challenge lies in reconstructing the attack path so as to identify the attack source. Among existing traceback techniques, the Probabilistic Packet Marking (PPM) approach is the most commonly used in conventional IP-based networks. However, since the marking probability assignment has a significant effect on both the convergence time and the performance of a scheme, PPM is not directly applicable in the WBAN environment due to its high convergence time and the overhead it places on intermediate nodes. Therefore, in this paper we propose a new scheme called the Efficient Traceback Technique (ETT), based on the Dynamic Probability Packet Marking (DPPM) approach, which uses the MAC header in place of the IP header. Instead of using a fixed marking probability, the proposed scheme uses a variable marking probability based on the number of hops travelled by a packet to reach the target node. Finally, path reconstruction algorithms are proposed to trace back an attacker. Evaluation and simulation results indicate that the proposed solution outperforms fixed PPM in terms of convergence time and computational overhead on nodes.
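
    A small simulation makes the dynamic-marking idea concrete. The sketch below assumes the common DPPM rule of marking with probability 1/d after d hops (the abstract only states that the probability varies with hop count, so this exact rule and the single overwritable mark field are assumptions, not the ETT packet format). With this rule every node on an n-hop path survives as the final mark with equal probability 1/n, which is what shortens the number of packets the victim needs for path reconstruction.

        import random

        def forward(path_nodes):
            """Simulate one packet along the path; return the surviving mark."""
            mark = None
            for hops, node in enumerate(path_nodes, start=1):
                if random.random() < 1.0 / hops:   # dynamic marking probability 1/d
                    mark = node                    # overwrite the (MAC-layer) mark field
            return mark

        # Victim-side view: collect marks from many packets and count how often
        # each node appears; all path nodes surface at roughly equal rates.
        path = ["S", "R1", "R2", "R3", "T"]        # attack source S to target T
        counts = {}
        for _ in range(10000):
            m = forward(path)
            counts[m] = counts.get(m, 0) + 1
        print(sorted(counts.items(), key=lambda kv: -kv[1]))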

  11. USDA Forest Products Laboratory's Debris Launcher

    Treesearch

    James J. Bridwell; Robert J. Ross; Zhiyong Cai; David E. Kretschmann

    2013-01-01

    Throughout the United States, hundreds of tornados and several hurricanes affect people’s livelihoods each year. These natural disasters not only cause structural damage to property, they also cause numerous injuries, and regrettably, far too many deaths of people caught in their path. In an effort to increase the probability of surviving the strong winds and...

  12. Universe creation from the third-quantized vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGuigan, M.

    1989-04-15

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  13. Universe creation from the third-quantized vacuum

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael

    1989-04-01

    Third quantization leads to a Hilbert space containing a third-quantized vacuum in which no universes are present as well as multiuniverse states. We consider the possibility of universe creation for the special case where the universe emerges in a no-particle state. The probability of such a creation is computed from both the path-integral and operator formalisms.

  14. Finding False Paths in Sequential Circuits

    NASA Astrophysics Data System (ADS)

    Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.

    2018-02-01

    A method for finding false paths in sequential circuits is developed. In contrast with the heuristic approaches currently in use, a precise method is suggested, based on applying operations to Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method allows false paths to be found when the transfer sequence length is not more than a given value, and it obviates the need to investigate combinational circuit equivalents of the given lengths. The possibility of using the developed method for more complicated circuits is discussed.

  15. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  16. Path-integral analysis of the time delay for wave-packet scattering and the status of complex tunneling times

    NASA Astrophysics Data System (ADS)

    Sokolovski, D.; Connor, J. N. L.

    1990-12-01

    The wave-packet simulation (WPS) method for calculating the time a tunneling particle spends inside a one-dimensional potential barrier is reexamined using the Feynman path-integral technique. Following earlier work by Sokolovski and Baskin [Phys. Rev. A 36, 4604 (1987)], the tunneling (or traversal) time t_T^pack is defined as a matrix element of a classical nonlocal functional between two states that represent the initial and transmitted wave packets. These states do not lie on the same orbit in Hilbert space; as a result, t_T^pack is complex-valued. It is shown that Re t_T^pack reduces to the standard WPS result, t_T^phase, for conditions similar to those employed in the conventional WPS analysis. Similarly, Im t_T^pack is shown to contain information about the energy dependence of the transmission probability. Under semiclassical conditions, Im t_T^pack reduces to the well-known Wentzel-Kramers-Brillouin expression for the tunneling time. It is shown that there are different definitions for the traversal time of a classical moving object whose size is comparable to the width of the region of interest. In the quantum case, these different definitions correspond to different ways of analyzing the WPS experiment. The path-integral approach demonstrates that the tunneling-time problem is one of understanding the physical significance of complex-valued off-orbit matrix elements of an operator or functional. The physical content of complex-valued tunneling times is discussed. It is emphasized that the use of complex tunneling times includes real-time approaches as a special case. Nevertheless, there is a limitation in the description of tunneling experiments using tunneling times, whether real or complex. The path-integral approach does not supply a universal traversal time, analogous to a classical time, that can be used in quantum situations. It is demonstrated that the often expressed hope of finding a well-defined and universal real tunneling time is erroneous.

  17. Quantification of the Intrusion Process at Kīlauea Volcano, Hawai'i

    NASA Astrophysics Data System (ADS)

    Wright, T. L.; Marsh, B. D.

    2014-12-01

    Knowing the time between initial intrusion and later eruption of a given volume of differentiated magma is key to evaluating the connections among magma transport and emplacement, solidification and differentiation, and melt extraction and eruption. Cooling rates for two Kīlauea lava lakes as well as known parent composition and residence times for intrusions that resulted in fractionated lavas later erupted on the East Rift Zone in 1955 (34 years) and 1977 (22 years) allow intrusion dimensions to be calculated. We model intrusions beneath Kīlauea's East Rift Zone near their point of separation from the magma transport path at ~ 5 km depth using Jaeger's (1957) method calibrated against Alae and Makaopuhi lava lakes with wallrock temperatures above the Curie point at 450-550°C. Minimum thicknesses of 50-70 meters are found for intrusions that fed the two fractionated lavas, as well as for long-lived magma bodies identified from geodetic monitoring during many East Rift eruptions. These intrusions began as dikes, but probably became sills or laccolithic bodies that remained near the transport path. Short-lived intrusions that also arrested near the magma transport path, but that retain a dike geometry, are hypothesized to serve as a trigger for the small but discrete increments of seaward movement on Kīlauea's south flank that characterize slow-slip earthquakes. Two additional thoughts arise from the quantitative modeling of magma cooling. First, long-term heating of the wallrock surrounding the horizontal East Rift Zone transport path slows the rate of cooling within the conduit, possibly contributing to the longevity of the East Rift eruption that began in 1983. Second, the combined effects of heating of the wall rock and an ever-increasing magma supply rate from the mantle may have forced breakdown and widening of the vertical transport conduit, which could explain the 5-15-km deep long-period earthquake swarms beneath Kīlauea's summit between 1987 and 1992.

  18. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a chosen number and K is the constraint length. The path at the end of each NK interval is selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, and to read out the stored branch information of the selected path, which corresponds to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory to store the trellis-derived information until the end of the message in order to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
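
    The patent targets a pipelined VLSI architecture; the sketch below is only a plain-software illustration of the block-wise trace-back idea, using an assumed toy rate-1/2, K = 3 code (generators 7 and 5 octal), hard decisions, and a small block length standing in for NK. Survivor pointers are kept only for the current block, and bits are committed by tracing back from the best-metric state at each block boundary, which is exactly the non-maximum-likelihood shortcut the abstract describes.

        G = (0b111, 0b101)          # generator polynomials (7, 5 octal)
        K = 3                       # constraint length
        NSTATES = 1 << (K - 1)
        INF = float("inf")

        def encode(bits):
            """Toy rate-1/2 convolutional encoder."""
            state, out = 0, []
            for b in bits:
                reg = (b << (K - 1)) | state
                out += [bin(reg & g).count("1") & 1 for g in G]
                state = reg >> 1
            return out

        def decode(received, block):
            """Hard-decision Viterbi; commit bits every `block` (~NK) steps."""
            metrics = [0.0] + [INF] * (NSTATES - 1)     # encoder starts in state 0
            survivors, decoded = [], []
            steps = len(received) // 2
            for t in range(steps):
                r0, r1 = received[2 * t], received[2 * t + 1]
                new, ptr = [INF] * NSTATES, [(0, 0)] * NSTATES
                for s in range(NSTATES):
                    if metrics[s] == INF:
                        continue
                    for b in (0, 1):
                        reg = (b << (K - 1)) | s
                        ns = reg >> 1
                        o = [bin(reg & g).count("1") & 1 for g in G]
                        m = metrics[s] + (o[0] ^ r0) + (o[1] ^ r1)
                        if m < new[ns]:             # keep the survivor branch
                            new[ns], ptr[ns] = m, (s, b)
                metrics = new
                survivors.append(ptr)
                if len(survivors) == block or t == steps - 1:
                    s = min(range(NSTATES), key=lambda x: metrics[x])
                    bits = []
                    for frame in reversed(survivors):   # trace back this block only
                        s, b = frame[s]
                        bits.append(b)
                    decoded.extend(reversed(bits))      # commit this block's bits
                    survivors = []
            return decoded

        msg = [1, 0, 1, 1, 0, 0, 1, 0]
        assert decode(encode(msg), block=4) == msg      # error-free channel check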

  19. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piece-wise modified to accommodate known and inferred moving obstacles, and includes attention to time-varying multiple subgoals which may influence a section of path at a time after the robot has begun traversing that planned path.

  20. An optically passive method that doubles the rate of 2-Ghz timing fiducials

    NASA Astrophysics Data System (ADS)

    Boni, R.; Kendrick, J.; Sorce, C.

    2017-08-01

    Solid-state optical comb-pulse generators provide a convenient and accurate method to include timing fiducials in a streak camera image for time base correction. Commercially available vertical-cavity surface-emitting lasers (VCSELs) emitting in the visible, currently in use, can be modulated at up to 2 GHz. An optically passive method is presented to interleave a time-delayed path of the 2-GHz comb with itself, producing a 4-GHz comb. This technique can be applied to VCSELs with higher modulation rates. A fiber-delivered, randomly polarized 2-GHz VCSEL comb is polarization split into s-polarization and p-polarization paths. One path is time delayed relative to the other by half the 2-GHz pulse period, with ±1-ps accuracy; the two paths then recombine at the fiber-coupled output. High throughput (≥90%) is achieved by carefully using polarization beam-splitting cubes, a total internal reflection beam-path-steering prism, and antireflection coatings. The glass path-length delay block and turning prism are optically contacted together. The beam polarizer cubes that split and recombine the paths are precision aligned and permanently cemented into place. We expect the palm-sized, inline fiber-coupled, comb-rate-doubling device to maintain its internal alignment indefinitely.

  1. Method and apparatus for executing a shift in a hybrid transmission

    DOEpatents

    Gupta, Pinaki; Kaminsky, Lawrence A; Demirovic, Besim

    2013-09-03

    A method for executing a transmission shift in a hybrid transmission including first and second electric machines includes executing a shift-through-neutral sequence from an initial transmission state to a target transmission state including executing an intermediate shift to neutral. Upon detecting a change in an output torque request while executing the shift-through-neutral sequence, possible recovery shift paths are identified. Available ones of the possible recovery shift paths are identified and a shift cost for each said available recovery shift path is evaluated. The available recovery shift path having a minimum shift cost is selected as a preferred recovery shift path and is executed to achieve a non-neutral transmission state.

  2. Safe Maritime Autonomous Path Planning in a High Sea State

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Quadrelli, Marco; Huntsberger, Terrance L.

    2014-01-01

    This paper presents a path planning method for sea surface vehicles that prevents capsizing and bow-diving in a high sea-state. A key idea is to use response amplitude operators (RAOs) or, in control terminology, the transfer functions from a sea state to a vessel's motion, in order to find a set of speeds and headings that results in excessive pitch and roll oscillations. This information is translated to arithmetic constraints on the ship's velocity, which are passed to a model predictive control (MPC)-based path planner to find a safe and optimal path that achieves specified goals. An obstacle avoidance capability is also added to the path planner. The proposed method is demonstrated by simulations.
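
    The screening step, turning response amplitude operators into an admissible velocity set for the MPC planner, can be sketched as below. The RAO surrogates, wave amplitude, and pitch/roll limits are made-up placeholders (real RAOs are vessel- and sea-state-specific transfer functions); the sketch only shows how a grid of speed/heading pairs is reduced to the safe velocities that become planner constraints.

        import numpy as np

        def rao_pitch(u, rel_heading, wave_amp=1.5):
            # Illustrative surrogate: worst near head seas and high speed.
            return wave_amp * (0.5 + 0.08 * u) * np.cos(rel_heading / 2) ** 2

        def rao_roll(u, rel_heading, wave_amp=1.5):
            # Illustrative surrogate: worst near beam seas.
            return wave_amp * 1.2 * np.sin(rel_heading) ** 2 / (1 + 0.05 * u)

        PITCH_MAX, ROLL_MAX = 1.0, 1.3           # safety limits (illustrative units)
        speeds = np.linspace(0.0, 10.0, 21)      # m/s
        headings = np.linspace(0.0, np.pi, 37)   # heading relative to wave direction

        U, H = np.meshgrid(speeds, headings, indexing="ij")
        safe = (rao_pitch(U, H) < PITCH_MAX) & (rao_roll(U, H) < ROLL_MAX)

        # Each safe (speed, heading) pair is an admissible velocity for the MPC
        # path planner; the unsafe pairs are excluded as constraints.
        for u, h in zip(U[safe][:5], H[safe][:5]):
            print(f"safe: {u:4.1f} m/s at {np.degrees(h):5.1f} deg off the waves")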

  3. Computational path planner for product assembly in complex environments

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade considerably in environments with complex product structure, narrow passages, or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the fake collisions of conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extending values assigned to each tree node and extending schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out, and comparisons are made between conventional path planning algorithms and the presented ones. The results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
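
    For readers unfamiliar with the baseline algorithm, a minimal 2-D point-robot RRT is sketched below. The paper's refined history-based and adaptive variants modify how this growth is biased, which is not reproduced here; the workspace, disc obstacles, and parameters are illustrative.

        import math, random

        def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5):
            """Grow a tree from start; return a path to goal or None."""
            tree = {start: None}                          # node -> parent
            for _ in range(iters):
                q = goal if random.random() < 0.05 else (  # 5% goal bias
                    random.uniform(0, 10), random.uniform(0, 10))
                near = min(tree, key=lambda n: math.dist(n, q))
                d = math.dist(near, q)
                if d == 0:
                    continue
                new = (near[0] + step * (q[0] - near[0]) / d,
                       near[1] + step * (q[1] - near[1]) / d)
                if any(math.dist(new, c) < r for c, r in obstacles):
                    continue                               # collision: reject
                tree[new] = near
                if math.dist(new, goal) < goal_tol:        # goal reached
                    path, n = [], new
                    while n is not None:
                        path.append(n)
                        n = tree[n]
                    return path[::-1]
            return None

        obstacles = [((5.0, 5.0), 1.5), ((3.0, 7.0), 1.0)]  # (center, radius) discs
        print(rrt((1.0, 1.0), (9.0, 9.0), obstacles))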

  4. Decrease of Fisher information and the information geometry of evolution equations for quantum mechanical probability amplitudes.

    PubMed

    Cafaro, Carlo; Alsing, Paul M

    2018-04-01

    The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.

  5. Decrease of Fisher information and the information geometry of evolution equations for quantum mechanical probability amplitudes

    NASA Astrophysics Data System (ADS)

    Cafaro, Carlo; Alsing, Paul M.

    2018-04-01

    The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.
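
    One of the three cases above, constant Fisher information, is easy to verify numerically. The sketch below is an illustration rather than the paper's information-geometric machinery: it computes F(theta) = sum_x (dp/dtheta)^2 / p(x|theta) for the Grover-like two-outcome squared amplitudes p = sin^2(theta) and 1 - p = cos^2(theta), for which F is constant and equal to 4.

        import numpy as np

        def fisher(theta, h=1e-6):
            """Fisher information of the two-outcome distribution at theta."""
            p = lambda t: np.array([np.sin(t) ** 2, np.cos(t) ** 2])
            dp = (p(theta + h) - p(theta - h)) / (2 * h)   # central difference
            return np.sum(dp ** 2 / p(theta))

        for th in (0.3, 0.7, 1.2):
            print(th, fisher(th))   # ~4.0 for every theta: constant Fisher information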

  6. Sensitivity images for multi-view ultrasonic array inspection

    NASA Astrophysics Data System (ADS)

    Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony J.; Zhang, Jie; Wilcox, Paul D.; Kashubin, Artem; Cawley, Peter

    2018-04-01

    The multi-view total focusing method (TFM) is an imaging technique for ultrasonic full matrix array data that typically exploits ray paths with zero, one or two internal reflections in the inspected object and for all combinations of longitudinal and transverse modes. The fusion of this vast quantity of views is expected to increase the reliability of ultrasonic inspection; however, it is not trivial to determine which views and which areas are the most suited for the detection of a given type and orientation of defect. This work introduces sensitivity images that give the expected response of a defect in any part of the inspected object and for any view. These images are based on a ray-based analytical forward model. They can be used to determine which views and which areas lead to the highest probability of detection of the defect. They can also be used for quantitatively analyzing the effects of the parameters of the inspection (probe angle and position, for example) on the overall probability of detection. Finally, they can be used to rescale TFM images so that the different views have comparable amplitudes. This methodology is applied to experimental data and discussed.
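
    As background for the view-fusion discussion, the sketch below implements a single direct-path TFM view (delay-and-sum over all transmit-receive pairs) on synthetic full-matrix-capture data containing one point scatterer. The array geometry, wave speed, and toy waveform are illustrative assumptions; the paper's multi-view TFM additionally handles mode conversions and internal reflections, and the sensitivity images weight those views.

        import numpy as np

        c, fs, fc = 6300.0, 50e6, 5e6           # wave speed (m/s), sampling, centre freq
        elems = np.stack([np.linspace(-9.6e-3, 9.6e-3, 32), np.zeros(32)], axis=1)
        scatterer = np.array([2e-3, 15e-3])     # true defect position (x, z)

        # Synthesize FMC data: each tx-rx trace is a windowed tone at its delay.
        t = np.arange(2048) / fs
        fmc = np.zeros((32, 32, t.size))
        for i in range(32):
            for j in range(32):
                tau = (np.linalg.norm(elems[i] - scatterer) +
                       np.linalg.norm(elems[j] - scatterer)) / c
                fmc[i, j] = np.cos(2*np.pi*fc*(t - tau)) * np.exp(-((t - tau)*fc*2)**2)

        # TFM image: for each pixel, sum every trace at its round-trip delay.
        xs = np.linspace(-10e-3, 10e-3, 81)
        zs = np.linspace(5e-3, 25e-3, 81)
        img = np.zeros((zs.size, xs.size))
        for iz, z in enumerate(zs):
            for ix, x in enumerate(xs):
                d = np.hypot(elems[:, 0] - x, z)          # element-to-pixel distances
                idx = np.round((d[:, None] + d[None, :]) / c * fs).astype(int)
                img[iz, ix] = abs(fmc[np.arange(32)[:, None],
                                      np.arange(32)[None, :], idx].sum())

        peak = np.unravel_index(img.argmax(), img.shape)
        print("peak at x=%.1f mm, z=%.1f mm" % (xs[peak[1]]*1e3, zs[peak[0]]*1e3))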

  7. Likelihood of Entanglement when Materials are Dropped Vertically onto a Rotating PTO Knuckle.

    PubMed

    Schwab, Charles V; Rempe, Isaac J

    2017-11-20

    Power take-off (PTO) is a common method of transferring power from a tractor to a towed piece of machinery. The PTO is also a well-documented cause of severe and often permanently disabling injuries to farm operators. The physical conditions that cause entanglements are not well established. Several studies have explored the parameters of PTO entanglements as materials have been drawn across a rotating PTO knuckle to test for entanglement probability. The objective of this study was to determine the probability of entanglement when materials are dropped vertically onto a PTO knuckle spinning at 540 rpm. A total of 360 randomized trials were conducted with ten replications for each of the six positions (center of yoke, edge of yoke rotating downward, edge of yoke rotating upward, center of cross, edge of cross rotating downward, and edge of cross rotating upward) and six different materials (woven cotton athletic shoe lace, cotton workboot lace, leather workboot lace, cotton twine, denim strip, and Tyvek strip). Not a single entanglement was recorded. Dramatic high-speed video imagery authenticated the material's motion and path as it interacted with the rotating PTO knuckle. Copyright© by the American Society of Agricultural Engineers.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems that increase an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating the times required to complete each task, with little regard to uncertainty, complexity, or the decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
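
    A toy Monte Carlo sketch of the kind of output such an analysis produces is given below: each task time along a path is treated as an uncertain (here lognormal) random variable rather than a single worst-case number, the total-delay distribution is propagated by sampling, and the risk-relevant tail probability is read off. All parameters are illustrative; the cited work derives its distributions from Bayesian updating of expert judgment and small-sample data, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        N = 100_000

        # (median seconds, dispersion) for three sequential tasks along one path.
        tasks = [(40.0, 0.35), (90.0, 0.50), (25.0, 0.25)]
        total = sum(rng.lognormal(np.log(med), sigma, N) for med, sigma in tasks)

        response_time = 150.0   # seconds until responders arrive (illustrative)
        print("median delay   %.0f s" % np.median(total))
        print("5th pct delay  %.0f s" % np.percentile(total, 5))
        print("P(delay < response time) = %.3f" % (total < response_time).mean())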

  9. Childhood Sexual Abuse and the Sociocultural Context of Sexual Risk Among Adult Latino Gay and Bisexual Men

    PubMed Central

    Neilands, Torsten B.; Díaz, Rafael

    2009-01-01

    Objectives. We sought to examine the relationships among childhood sexual abuse, social discrimination, psychological distress, and HIV risk among Latino gay and bisexual men in the United States. Methods. Data were from a probability sample of 912 men from Miami, FL; Los Angeles, CA; and New York, NY. We used logistic regression and path analyses to examine direct and indirect effects of childhood sexual abuse on psychological distress and sexual risk behavior. Results. We found a 15.8% (95% confidence interval = 12.3%, 19.2%) prevalence of childhood sexual abuse. Not having sex before age 16 years and having consensual sex before age 16 years did not differ from each other in predicting any of the outcomes of interest. Forced sex was associated with a significantly increased risk for all outcomes. A path analysis yielded direct effects of childhood sexual abuse and exposure to homophobia during childhood and during adulthood on psychological distress and indirect effects on risky sexual behavior. Conclusions. HIV-risk patterns among Latino gay and bisexual men are related to childhood sexual abuse and a social context of discrimination, which combined lead to symptoms of psychological distress and participation in risky sexual situations that increase risky sexual behaviors associated with HIV acquisition. PMID:19372522

  10. Kinematics of the human mandible for different head postures.

    PubMed

    Visscher, C M; Huddleston Slater, J J; Lobbezoo, F; Naeije, M

    2000-04-01

    The influence of head posture on movement paths of the incisal point (IP) and of the mandibular condyles during free open-close movements was studied. Ten persons, without craniomandibular or cervical spine disorders, participated in the study. Open-close mandibular movements were recorded with the head in five postures, viz., natural head posture, forward head posture, military posture, and lateroflexion to the right and to the left side, using the Oral Kinesiologic Analysis System (OKAS-3D). This study showed that in a military head posture, the opening movement path of the incisal point is shifted anteriorly relative to the path in a natural head posture. In a forward head posture, the movement path is shifted posteriorly, whereas during lateroflexion it deviates to the side the head has moved to. Moreover, the intra-articular distance in the temporomandibular joint during closing is smaller with the head in military posture and greater in forward head posture, as compared to the natural head posture. During lateroflexion, the intra-articular distance on the ipsilateral side is smaller. The influence of head posture upon the kinematics of the mandible is probably a manifestation of differences in mandibular loading in the different head postures.

  11. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    NASA Astrophysics Data System (ADS)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

    Path searching based on distribution network topology has proven effective in setting software, and a path searching method that includes DG power sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: searches start from the fault-isolation switch and end at each power source; then, according to the line load traversed by each search path and the important load integrated along the optimized path, an optimized division scheme of planned islands is formed in which each DG serves as a power source balanced against the local important load. Finally, COBASE software and the deployed distribution network automation software are used to illustrate the effectiveness of the automatic restoration program.

  12. Dynamic behavior of the interaction between epidemics and cascades on heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Jiang, Lurong; Jin, Xinyu; Xia, Yongxiang; Ouyang, Bo; Wu, Duanpo

    2014-12-01

    Epidemic spreading and cascading failure are two important dynamical processes on complex networks. They have been investigated separately for a long time. But in the real world, these two dynamics sometimes may interact with each other. In this paper, we explore a model combined with the SIR epidemic spreading model and a local load sharing cascading failure model. There exists a critical value of the tolerance parameter for which the epidemic with high infection probability can spread out and infect a fraction of the network in this model. When the tolerance parameter is smaller than the critical value, the cascading failure cuts off the abundance of paths and blocks the spreading of the epidemic locally. While the tolerance parameter is larger than the critical value, the epidemic spreads out and infects a fraction of the network. A method for estimating the critical value is proposed. In simulations, we verify the effectiveness of this method in the uncorrelated configuration model (UCM) scale-free networks.

  13. An atomic and molecular fluid model for efficient edge-plasma transport simulations at high densities

    NASA Astrophysics Data System (ADS)

    Rognlien, Thomas; Rensink, Marvin

    2016-10-01

    Transport simulations for the edge plasma of tokamaks and other magnetic fusion devices require the coupling of the plasma and recycling or injected neutral gas. There are various neutral models used for this purpose, e.g., atomic fluid models, Monte Carlo particle models, transition/escape probability methods, and semi-analytic models. While the Monte Carlo method is generally viewed as the most accurate, it is time consuming, and it becomes even more demanding for device simulations at the high densities and sizes typical of fusion power plants, because the neutral collisional mean-free path becomes very small. Here we examine the behavior of an extended fluid neutral model for hydrogen that includes both atoms and molecules, which easily incorporates nonlinear neutral-neutral collision effects. In addition to the strong charge exchange between hydrogen atoms and ions, elastic scattering is included among all species. Comparisons are made with the DEGAS 2 Monte Carlo code. Work performed for U.S. DoE by LLNL under Contract DE-AC52-07NA27344.

  14. Whole-Genome Sequencing in Outbreak Analysis

    PubMed Central

    Turner, Stephen D.; Riley, Margaret F.; Petri, William A.; Hewlett, Erik L.

    2015-01-01

    SUMMARY In addition to the ever-present concern of medical professionals about epidemics of infectious diseases, the relative ease of access and low cost of obtaining, producing, and disseminating pathogenic organisms or biological toxins mean that bioterrorism activity should also be considered when facing a disease outbreak. Utilization of whole-genome sequencing (WGS) in outbreak analysis facilitates the rapid and accurate identification of virulence factors of the pathogen and can be used to identify the path of disease transmission within a population and provide information on the probable source. Molecular tools such as WGS are being refined and advanced at a rapid pace to provide robust and higher-resolution methods for identifying, comparing, and classifying pathogenic organisms. If these methods of pathogen characterization are properly applied, they will enable an improved public health response whether a disease outbreak was initiated by natural events or by accidental or deliberate human activity. The current application of next-generation sequencing (NGS) technology to microbial WGS and microbial forensics is reviewed. PMID:25876885

  15. Quantum Biometrics with Retinal Photon Counting

    NASA Astrophysics Data System (ADS)

    Loulakis, M.; Blatsios, G.; Vrettou, C. S.; Kominis, I. K.

    2017-10-01

    It is known that the eye's scotopic photodetectors, rhodopsin molecules, and their associated phototransduction mechanism leading to light perception, are efficient single-photon counters. We here use the photon-counting principles of human rod vision to propose a secure quantum biometric identification based on the quantum-statistical properties of retinal photon detection. The photon path along the human eye until its detection by rod cells is modeled as a filter having a specific transmission coefficient. Precisely determining its value from the photodetection statistics registered by the conscious observer is a quantum parameter estimation problem that leads to a quantum secure identification method. The probabilities for false-positive and false-negative identification of this biometric technique can readily approach 10^-10 and 10^-4, respectively. The security of the biometric method can be further quantified by the physics of quantum measurements. An impostor must be able to perform quantum thermometry and quantum magnetometry with energy resolution better than 10^-9 ℏ in order to foil the device by noninvasively monitoring the biometric activity of a user.

  16. Comparison of carbon uptake estimates from forest inventory and Eddy-Covariance for a montane rainforest in central Sulawesi

    NASA Astrophysics Data System (ADS)

    Heimsch, Florian; Kreilein, Heiner; Rauf, Abdul; Knohl, Alexander

    2016-04-01

    Rainforests in general and montane rainforests in particular have rarely been studied over longer time periods. We aim to provide baseline information on a montane tropical forest's carbon uptake over time in order to quantify possible losses through land-use change. Thus we conducted a re-inventory of 22 ten-year-old forest inventory plots, giving us a rare opportunity to quantify carbon uptake over such a long time period by traditional methods. We discuss shortfalls of such techniques and why our estimate of 1.5 Mg/ha/a should be considered a lower bound rather than the mean carbon uptake per year. At the same location as the inventory, CO2 fluxes were measured with the eddy-covariance technique. Measurements were conducted at 48 m height with an LI-7500 open-path infrared gas analyser. We will compare carbon uptake estimates from these measurements to those of the more conventional inventory method and discuss which factors are probably responsible for the differences.

  17. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

    Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), for treatment planning in radiation therapy, and in many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that in the voxel from which the photon was emitted, the authors calculate the location of the entry point into the voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of the calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, thereby confirming the effectiveness of the algorithm.
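
    A much-simplified 2-D sketch of this bookkeeping is given below: the photon marches along its direction checking only the medium ID of each voxel, and the free-path accounting is updated only when the medium actually changes (the remaining free path is rescaled to conserve the number of remaining mean free paths). The two-medium phantom, cross-sections, marching step, and single-interaction termination are toy assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        mu = {0: 0.1, 1: 0.5}                    # attenuation per mm for two media
        grid = np.zeros((100, 100), dtype=int)   # 1 mm voxels, medium 0 everywhere
        grid[:, 60:] = 1                         # medium 1 beyond x = 60 mm

        def first_interaction(pos, direction):
            """Return the interaction site of one photon, or None if it escapes."""
            step = 0.25                          # sub-voxel marching step in mm
            medium = grid[int(pos[1]), int(pos[0])]
            free = rng.exponential(1.0 / mu[medium])   # sampled free path length
            travelled = 0.0
            while True:
                pos = pos + step * direction
                travelled += step
                if not (0 <= pos[0] < 100 and 0 <= pos[1] < 100):
                    return None                  # escaped the phantom
                m = grid[int(pos[1]), int(pos[0])]
                if m != medium:                  # medium changed: rescale free path
                    free = travelled + (free - travelled) * mu[medium] / mu[m]
                    medium = m
                if travelled >= free:
                    return pos                   # interaction occurs here

        hits = [first_interaction(np.array([0.0, 50.0]), np.array([1.0, 0.0]))
                for _ in range(20000)]
        depths = np.array([h[0] for h in hits if h is not None])
        print("mean interaction depth %.1f mm" % depths.mean())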

  18. Preparing Future Scholars for Academia and Beyond: A Mixed Method Investigation of Doctoral Students' Preparedness for Multiple Career Paths

    ERIC Educational Resources Information Center

    Cason, Jennifer

    2016-01-01

    This action research study is a mixed methods investigation of doctoral students' preparedness for multiple career paths. PhD students face two challenges preparing for multiple career paths: lack of preparation and limited engagement in conversations about the value of their research across multiple audiences. This study focuses on PhD students'…

  19. DOAS (differential optical absorption spectroscopy) urban pollution measurements

    NASA Astrophysics Data System (ADS)

    Stevens, Robert K.; Vossler, T. L.

    1991-05-01

    During July and August of 1990, a differential optical absorption spectrometer (DOAS) made by OPSIS Inc. was used to measure gaseous air pollutants over three separate open paths in Atlanta, GA. Over path 1 (1099 m) and path 2 (1824 m), ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), nitrous acid (HNO2), formaldehyde (HCHO), benzene, toluene, and o-xylene were measured. Nitric oxide (NO) and ammonia (NH3) were monitored over path 3 (143 m). The data quality and data capture depended on the compound being measured and the path over which it was measured. Data quality criteria for each compound were chosen such that the average relative standard deviation would be less than 25%. Data capture ranged from 43% for o-xylene for path 1 to 95% for ozone for path 2. Benzene, toluene, and o-xylene concentrations measured over path 2, which crossed over an interstate highway, were higher than concentrations measured over path 1, implicating emissions from vehicles on the highway as a significant source of these compounds. Federal Reference Method (FRM) instruments were located near the DOAS light receivers, and measurements of O3, NO2, and NO were made concurrently with the DOAS. Correlation coefficients greater than 0.85 were obtained between the DOAS and FRMs; however, there was a difference between the mean values obtained by the two methods for O3 and NO. A gas chromatograph for measuring volatile organic compounds was operated next to the FRMs. Correlation coefficients of about 0.66 were obtained between the DOAS and GC measurements of benzene and o-xylene. However, the correlation coefficient between the DOAS and GC measurements of toluene averaged only 0.15 for the two DOAS measurement paths. The lack of correlation and other factors indicate the possibility of a localized source of toluene near the GC. In general, disagreements between the two measurement methods could be caused by atmospheric inhomogeneities or interferences in the DOAS and other methods.

  20. Control and instanton trajectories for random transitions in turbulent flows

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2011-12-01

    Many turbulent systems exhibit random switches between qualitatively different attractors. The transition between these bistable states is often an extremely rare event that cannot be computed through direct numerical simulation (DNS) due to computational limitations. We present results for the calculation of instanton trajectories (a control problem) between non-equilibrium stationary states (attractors) in the 2D stochastic Navier-Stokes equations. By representing the transition probability between two states using a path integral formulation, we can compute the most probable trajectory (instanton) joining two non-equilibrium stationary states. Technically, this is equivalent to the minimization of an action, which can be related to a fluid mechanics control problem.
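
    The action-minimization structure can be illustrated on a toy problem. For the overdamped double-well dynamics dx = (x - x^3) dt + sqrt(eps) dW, the most probable transition path between the attractors x = -1 and x = +1 minimizes the Freidlin-Wentzell action S = (1/2) integral (xdot - f(x))^2 dt; the sketch below discretizes this and minimizes numerically. The 2-D stochastic Navier-Stokes computation of the talk is vastly larger, but the minimization has the same shape. Time horizon, discretization, and optimizer are illustrative choices.

        import numpy as np
        from scipy.optimize import minimize

        T, n = 20.0, 200
        dt = T / n
        f = lambda x: x - x**3               # drift of the double-well dynamics

        def action(interior):
            x = np.concatenate([[-1.0], interior, [1.0]])   # clamp the endpoints
            xdot = np.diff(x) / dt
            xmid = 0.5 * (x[1:] + x[:-1])                   # midpoint discretization
            return 0.5 * np.sum((xdot - f(xmid))**2) * dt

        x0 = np.linspace(-1.0, 1.0, n + 1)[1:-1]            # straight initial path
        res = minimize(action, x0, method="L-BFGS-B")
        # For gradient systems S -> 2*[V(saddle) - V(min)] = 0.5 as T grows.
        print("minimal action S = %.4f (limit 0.5 for large T)" % res.fun)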

  1. A Novel Friendly Jamming Scheme in Industrial Crowdsensing Networks against Eavesdropping Attack.

    PubMed

    Li, Xuran; Wang, Qiu; Dai, Hong-Ning; Wang, Hao

    2018-06-14

    Eavesdropping attack is one of the most serious threats in industrial crowdsensing networks. In this paper, we propose a novel anti-eavesdropping scheme by introducing friendly jammers to an industrial crowdsensing network. In particular, we establish a theoretical framework considering both the probability of eavesdropping attacks and the probability of successful transmission to evaluate the effectiveness of our scheme. Our framework takes into account various channel conditions such as path loss, Rayleigh fading, and the antenna type of friendly jammers. Our results show that using jammers in industrial crowdsensing networks can effectively reduce the eavesdropping risk while having no significant influence on legitimate communications.
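
    A Monte Carlo sketch of the kind of quantity such a framework evaluates is given below: the probability that the eavesdropper's SINR exceeds a decoding threshold under path loss and Rayleigh fading, alongside the legitimate link's success probability under the same jamming. All powers, distances, the path-loss exponent, and the threshold are illustrative assumptions; the paper's analysis is analytic and also models the jammer antenna type, which this sketch omits.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 200_000
        alpha, sinr_th, noise = 3.0, 1.0, 1e-9
        P_tx, P_jam = 1e-3, 1e-3                 # transmit and jamming power (W)
        d_rx, d_eav = 5.0, 8.0                   # sensor-to-receiver/eavesdropper (m)
        d_jam_eav, d_jam_rx = 3.0, 12.0          # jammer-to-eavesdropper/receiver (m)

        def sinr(P_sig, d_sig, P_int, d_int):
            h_s = rng.exponential(1.0, N)        # Rayleigh fading -> exponential gain
            h_i = rng.exponential(1.0, N)
            return (P_sig * h_s * d_sig**-alpha) / (noise + P_int * h_i * d_int**-alpha)

        p_eaves = (sinr(P_tx, d_eav, P_jam, d_jam_eav) > sinr_th).mean()
        p_legit = (sinr(P_tx, d_rx,  P_jam, d_jam_rx ) > sinr_th).mean()
        print("P(eavesdropping succeeds)          = %.3f" % p_eaves)
        print("P(legitimate transmission succeeds) = %.3f" % p_legit)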

  2. The Homotopic Probability Distribution and the Partition Function for the Entangled System Around a Ribbon Segment Chain

    NASA Astrophysics Data System (ADS)

    Qian, Shang-Wu; Gu, Zhi-Yu

    2001-12-01

    Using the Feynman path integral with topological constraints arising from the presence of one singular line, we find the homotopic probability distribution P_L^n for the winding number n and the partition function P_L of the entangled system around a ribbon segment chain. We find that when the width 2a of the ribbon segment chain increases, the partition function decreases exponentially, whereas the free energy increases by an amount which is proportional to the square of the width. When the width tends to zero we obtain the same results as those of a single chain with one singular point.

  3. A two-stage broadcast message propagation model in social networks

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One of the message types in social networks is the broadcast message: a message that has a unique destination, unknown to the publisher, such as a 'lost and found' notice. Its propagation always has two stages. Because of this feature, rumor propagation models and epidemic propagation models have difficulty describing its propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of the first forwarding probability and the second forwarding probability. Another part of our work quantifies how the chance of successful message transmission at each level is influenced by several factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks, and the results demonstrate the model's effectiveness.
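
    A small simulation sketch of the two-stage mechanism is shown below: the publisher's direct contacts forward with a first-stage probability p1, everyone further out forwards with a second-stage probability p2, and each contact actually receives (reads) the message with probability p_recv; success means the unknown destination is reached. The random graph and all probabilities are illustrative assumptions, not the paper's model calibration.

        import random
        import networkx as nx

        def broadcast_succeeds(G, src, dst, p1, p2, p_recv):
            informed = {src}
            frontier, hop = [src], 0
            while frontier:
                hop += 1
                nxt = []
                for node in frontier:
                    for nb in G.neighbors(node):
                        if nb in informed or random.random() > p_recv:
                            continue              # already informed or not read
                        informed.add(nb)
                        if nb == dst:
                            return True
                        p_fwd = p1 if hop == 1 else p2   # two-stage forwarding
                        if random.random() < p_fwd:
                            nxt.append(nb)
                frontier = nxt
            return False

        G = nx.erdos_renyi_graph(500, 0.02, seed=3)
        runs = 2000
        hits = sum(broadcast_succeeds(G, 0, 250, p1=0.8, p2=0.3, p_recv=0.9)
                   for _ in range(runs))
        print("estimated success probability: %.3f" % (hits / runs))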

  4. Faraday Rotation: Effect of Magnetic Field Reversals

    NASA Astrophysics Data System (ADS)

    Melrose, D. B.

    2010-12-01

    The standard formula for the rotation measure (RM), which determines the position angle ψ = RM λ² due to Faraday rotation, includes contributions only from the portions of the ray path where the natural modes of the plasma are circularly polarized. In small regions of the ray path where the projection of the magnetic field on the ray path reverses sign (called QT regions) the modes are nearly linearly polarized. The neglect of QT regions in estimating RM is not well justified at frequencies below a transition frequency where mode coupling changes from strong to weak. By integrating the polarization transfer equation across a QT region in the latter limit, I estimate the additional contribution Δψ needed to correct this omission. In contrast with a result proposed by Broderick & Blandford, Δψ is small and probably unobservable. I identify a new source of circular polarization, due to mode coupling in an asymmetric QT region. I also identify a new circular-polarization-dependent correction to the dispersion measure at low frequencies.

  5. Are Negative Peer Influences Domain Specific? Examining the Influence of Peers and Parents on Externalizing and Drug Use Behaviors.

    PubMed

    Cox, Ronald B; Criss, Michael M; Harrist, Amanda W; Zapata-Roblyer, Martha

    2017-10-01

    Most studies tend to characterize peer influences as either positive or negative. In a sample of 1815 youth from 14 different schools in Caracas, Venezuela, we explored how two types of peer affiliations (i.e., deviant and drug-using peers) differentially mediated the paths from positive parenting to youth's externalizing behavior and licit and illicit drug use. We used Zero Inflated Poisson models to test the probability of use and the extent of use during the past 12 months. Results suggested that peer influences are domain specific among Venezuelan youth. That is, deviant peer affiliations mediated the path from positive parenting to youth externalizing behaviors, and peer drug-using affiliations mediated the paths to the drug use outcomes. Mediation effects were partial, suggesting that parenting explained unique variance in the outcomes after accounting for both peer variables, gender, and age. We discuss implications for the development of screening tools and for prevention interventions targeting adolescents from different cultures.

  6. Cooperative Surveillance and Pursuit Using Unmanned Aerial Vehicles and Unattended Ground Sensors

    PubMed Central

    Las Fargeas, Jonathan; Kabamba, Pierre; Girard, Anouck

    2015-01-01

    This paper considers the problem of path planning for a team of unmanned aerial vehicles performing surveillance near a friendly base. The unmanned aerial vehicles do not possess sensors with automated target recognition capability and, thus, rely on communicating with unattended ground sensors placed on roads to detect and image potential intruders. The problem is motivated by persistent intelligence, surveillance, reconnaissance and base defense missions. The problem is formulated and shown to be intractable. A heuristic algorithm to coordinate the unmanned aerial vehicles during surveillance and pursuit is presented. Revisit deadlines are used to schedule the vehicles' paths nominally. The algorithm uses detections from the sensors to predict intruders' locations and selects the vehicles' paths by minimizing a linear combination of missed deadlines and the probability of not intercepting intruders. An analysis of the algorithm's completeness and complexity is then provided. The effectiveness of the heuristic is illustrated through simulations in a variety of scenarios. PMID:25591168

  7. Children's exercise behavior: the moderating role of habit processes within the theory of planned behavior.

    PubMed

    Hashim, H A; Jawis, M N; Wahat, A; Grove, J R

    2014-01-01

    The moderating effect of exercise habit strength and specific habit processes within the theory of planned behavior (TPB) was tested in children. Participants were primary school students (N = 380, mean age = 10.46 ± .52). The data were collected using self-report measures followed by a one-mile run test. Data were analyzed using structural equation modeling. The findings revealed that 34%, 57%, and 9% of students could be classified as low, moderate, and high in physical activity (PA), respectively. Path analysis for the overall model revealed significant path loadings (p < .05), except for the attitude-intention path. Moderating-effects results revealed that strong habit strength extinguished the effects of intention on PA. Habit strength has the potential to minimize the deliberate processes associated with intention to exercise, thereby increasing the probability of intention-behavior translation. For specific habit processes, only negative affect appears to moderate the relationships between the TPB variables.

  8. Generalized quantum interference of correlated photon pairs

    PubMed Central

    Kim, Heonoh; Lee, Sang Min; Moon, Han Seb

    2015-01-01

    Superposition and indistinguishability between probability amplitudes have played an essential role in observing quantum interference effects of correlated photons. The Hong-Ou-Mandel interference and interferences of the path-entangled photon number state are of special interest in the field of quantum information technologies. However, a fully generalized two-photon quantum interferometric scheme accounting for the Hong-Ou-Mandel scheme and path-entangled photon number states has not yet been proposed. Here we report the experimental demonstration of generalized two-photon interferometry with both the interferometric properties of the Hong-Ou-Mandel effect and the fully unfolded version of the path-entangled photon number state, using photon-pair sources that are independently generated by spontaneous parametric down-conversion. Our experimental scheme explains two-photon interference fringes revealing single- and two-photon coherence properties in a single interferometer setup. Using the proposed interferometric measurement, it is possible to directly estimate the joint spectral intensity of a photon-pair source. PMID:25951143

  9. Path length dependent neutron diffraction peak shifts observed during residual strain measurements in U–8 wt% Mo castings

    DOE PAGES

    Steiner, M. A.; Bunn, J. R.; Einhorn, J. R.; ...

    2017-05-16

    This study reports an angular diffraction peak shift that scales linearly with the neutron beam path length traveled through a diffracting sample. This shift was observed in the context of mapping the residual stress state of a large U–8 wt% Mo casting, as well as during complementary measurements on a smaller casting of the same material. If uncorrected, this peak shift implies a non-physical level of residual stress. A hypothesis for the origin of this shift is presented, based upon non-ideal focusing of the neutron monochromator in combination with changes to the wavelength distribution reaching the detector due to factors such as attenuation. The magnitude of the shift is observed to vary linearly with the width of the diffraction peak reaching the detector. Consideration of this shift will be important for strain measurements requiring long path lengths through samples with significant attenuation. This effect can probably be reduced by selecting smaller voxel slit widths.

  10. Viewing strategies for simple and chimeric faces: an investigation of perceptual bias in normals and schizophrenic patients using visual scan paths.

    PubMed

    Phillips, M L; David, A S

    1997-11-01

    Left hemi-face (LHF) perceptual bias of chimeric faces in normal right-handers is well-documented. We investigated mechanisms underlying this by measuring visual scan paths in right-handed normal controls (n = 9) and schizophrenics (n = 8) for simple, full-face photographs and schematic, happy-sad chimeric faces over 5 s. Normals viewed the left side/LHF first, more so than the right of all stimuli. Schizophrenics viewed the LHF first more than the right of stimuli for which there was a LHF choice of predominant affect. Neither group demonstrated an overall LHF perceptual bias for the chimeric stimuli. Readjustment of the initial LHF bias in controls was probably a result of increased attention to stimulus detail with scanning, whereas the schizophrenics demonstrated difficulty in redirection of the initial focus of attention. The study highlights the role of visual scan paths as a marker of normal and abnormal attentional processes. Copyright 1997 Academic Press.

  11. Quadcopter Path Following Control Design Using Output Feedback with Command Generator Tracker LOS Based At Square Path

    NASA Astrophysics Data System (ADS)

    Nugraha, A. T.; Agustinah, T.

    2018-01-01

    The quadcopter is an unstable, underactuated, and nonlinear system, and its control has become an important focus of quadcopter research. In this study, a path-following control method for position on the X and Y axes, using the Command Generator Tracker (CGT) structure, is tested. Quadcopter attitude and position control using optimal output feedback are compared. H∞ performance is added to the optimal output feedback control to maintain the stability and robustness of the quadcopter. Iterative numerical Linear Matrix Inequality (LMI) techniques are used to find the controller gain. The path-following control problem is solved using LQ regulators with output feedback. Simulations show that the control system can follow the paths defined by a square-shaped reference signal. The simulation results suggest that the method can bring the yaw angle to the expected value. The quadcopter can automatically follow the path with mean cross-track errors of X = 0.5 m and Y = 0.2 m.

  12. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that embody the motion-path constraint, the method produces natural, artifact-free frame interpolation. We test different types of old film sequences and compare with other methods; the results show that our method achieves the desired performance without hole or ghosting effects.
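
    The first two steps, motion paths from optical flow and a bidirectional pre-intermediate frame, can be sketched with OpenCV as below. The synthetic moving-square frames, Farneback parameters, and simple blending are illustrative assumptions; the flow is anchored at the source frames rather than at time t, and it is exactly this approximation that leaves the holes and ghosts the paper's patch-matching stage (not reproduced here) repairs.

        import cv2
        import numpy as np

        h, w, t = 240, 320, 0.5
        f0 = np.zeros((h, w), np.uint8); f0[100:140,  80:120] = 255   # square
        f1 = np.zeros((h, w), np.uint8); f1[100:140, 120:160] = 255   # moved right

        # Motion paths: dense optical flow in both directions.
        flow01 = cv2.calcOpticalFlowFarneback(f0, f1, None, 0.5, 3, 25, 3, 5, 1.2, 0)
        flow10 = cv2.calcOpticalFlowFarneback(f1, f0, None, 0.5, 3, 25, 3, 5, 1.2, 0)

        xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))

        # Backward-warp each reference frame part way along its motion path.
        warp0 = cv2.remap(f0.astype(np.float32),
                          xs + t * flow10[..., 0], ys + t * flow10[..., 1],
                          cv2.INTER_LINEAR)
        warp1 = cv2.remap(f1.astype(np.float32),
                          xs + (1 - t) * flow01[..., 0], ys + (1 - t) * flow01[..., 1],
                          cv2.INTER_LINEAR)

        pre_intermediate = (1 - t) * warp0 + t * warp1   # blended pre-intermediate
        cv2.imwrite("pre_intermediate.png", pre_intermediate.astype(np.uint8))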

  13. NVIDIA OptiX ray-tracing engine as a new tool for modelling medical imaging systems

    NASA Astrophysics Data System (ADS)

    Pietrzak, Jakub; Kacperski, Krzysztof; Cieślar, Marek

    2015-03-01

    The most accurate technique to model the X- and gamma radiation path through a numerically defined object is the Monte Carlo simulation, which follows single photons according to their interaction probabilities. A simplified and much faster approach, which just integrates total interaction probabilities along selected paths, is known as ray tracing. Both techniques are used in medical imaging for simulating real imaging systems and as projectors required in iterative tomographic reconstruction algorithms. These approaches are ready for massively parallel implementation, e.g. on Graphics Processing Units (GPUs), which can greatly accelerate the computation time at a relatively low cost. In this paper we describe the application of the NVIDIA OptiX ray-tracing engine, popular in professional graphics and rendering applications, as a new powerful tool for X- and gamma ray-tracing in medical imaging. It allows the implementation of a variety of physical interactions of rays with pixel-, mesh- or nurbs-based objects, and the recording of any required quantities, like path integrals, interaction sites, deposited energies, and others. Using the OptiX engine we have implemented a code for rapid Monte Carlo simulations of Single Photon Emission Computed Tomography (SPECT) imaging, as well as a ray-tracing projector, which can be used in reconstruction algorithms. The engine generates efficient, scalable and optimized GPU code, ready to run on multi-GPU heterogeneous systems. We have compared the results of our simulations with the GATE package. With the OptiX engine the computation time of a Monte Carlo simulation can be reduced from days to minutes.

  14. Comparison of micrometeorological methods using open-path optical instruments for measuring methane emission from agricultural sites

    USDA-ARS?s Scientific Manuscript database

    In this study, we evaluated the accuracies of two relatively new micrometeorological methods using open-path tunable diode laser absorption spectrometers: the vertical radial plume mapping method (US EPA OTM-10) and the backward Lagrangian stochastic method (WindTrax®). We have evaluated the accuracy of t...

  15. Evolutionistic or revolutionary paths? A PACS maturity model for strategic situational planning.

    PubMed

    van de Wetering, Rogier; Batenburg, Ronald; Lederman, Reeva

    2010-07-01

    While many hospitals are re-evaluating their current Picture Archiving and Communication System (PACS), few have a mature strategy for PACS deployment. Furthermore, implementation strategies and strategic, situational planning methods for the evolution of PACS maturity are scarce in the scientific literature. Consequently, in this paper we propose a strategic planning method for PACS deployment. This method builds upon a PACS maturity model (PMM), based on the elaboration of the strategic alignment concept and the maturity growth path concept previously developed in the PACS domain. First, we review the literature on strategic planning for information systems and information technology and on PACS maturity. Secondly, the PMM is extended by applying four different strategic perspectives of the Strategic Alignment Framework, whereupon two types of growth paths (evolutionistic and revolutionary) are applied to form a roadmap for the PMM. This roadmap defines a path from one level of maturity to the next. An extended method for PACS strategic planning is thus developed. The method defines eight distinctive strategies for PACS strategic situational planning that allow decision-makers in hospitals to decide which approach best suits their hospital's current situation and future ambition, and what in principle is needed to evolve through the different maturity levels. The proposed method allows hospitals to plan strategically for PACS maturation. It is situational in that the required investments and activities depend on the alignment between the hospital strategy and the selected growth path. The inclusion of both the strategic alignment and maturity growth path concepts makes the planning method rigorous and provides a framework for further empirical research and clinical practice.

  16. Confidence level estimation in multi-target classification problems

    NASA Astrophysics Data System (ADS)

    Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia

    2018-04-01

    This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm composed of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e., after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.

  17. Systems for column-based separations, methods of forming packed columns, and methods of purifying sample components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2000-01-01

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  18. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2006-02-21

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  19. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components.

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2004-08-24

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  20. AEDT sensor path methods using BADA4

    DOT National Transportation Integrated Search

    2017-06-01

    This report documents the development and use of sensor path data processing in the Federal Aviation Administration's (FAA's) Aviation Environmental Design Tool (AEDT). The methods are primarily intended to assist analysts with using AEDT to determ...

  1. Method and apparatus for dispensing compressed natural gas and liquified natural gas to natural gas powered vehicles

    DOEpatents

    Bingham, Dennis A.; Clark, Michael L.; Wilding, Bruce M.; Palmer, Gary L.

    2005-05-31

    A fueling facility and method for dispensing liquid natural gas (LNG), compressed natural gas (CNG), or both on demand. The fueling facility may include a source of LNG, such as a cryogenic storage vessel. A low-volume, high-pressure pump is coupled to the source of LNG to produce a stream of pressurized LNG. The stream of pressurized LNG may be selectively directed through an LNG flow path or to a CNG flow path which includes a vaporizer configured to produce CNG from the pressurized LNG. A portion of the CNG may be drawn from the CNG flow path and introduced into the LNG flow path to control the temperature of LNG flowing therethrough. Similarly, a portion of the LNG may be drawn from the LNG flow path and introduced into the CNG flow path to control the temperature of CNG flowing therethrough.

  2. Teleconnection Paths via Climate Network Direct Link Detection.

    PubMed

    Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo

    2015-12-31

    Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. These are of great importance in climate dynamics as they reflect the transportation of energy and climate change on global scales (like the El Niño phenomenon). Yet, the path of influence propagation between such remote regions, and weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, where the optimal path is found based on a minimal total cost function of the direct links. We demonstrate our method using near surface air temperature reanalysis data, on identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding regarding the emergence of climate patterns on global scales.
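    A minimal sketch of the path-finding step such an approach requires: given link costs derived from direct-link weights, find the minimal total-cost path between two grid points. The graph, node names, and the suggested cost transform below are invented for illustration and are not the paper's exact choices.

        # Hypothetical sketch: optimal path between two grid points in a climate
        # network, where each direct link carries a cost derived from the link
        # weight (e.g., stronger direct correlation -> lower cost).
        import heapq

        def optimal_path(cost, source, target):
            """Dijkstra shortest path on a dict-of-dicts cost graph."""
            dist = {source: 0.0}
            prev = {}
            heap = [(0.0, source)]
            visited = set()
            while heap:
                d, u = heapq.heappop(heap)
                if u in visited:
                    continue
                if u == target:
                    break
                visited.add(u)
                for v, w in cost.get(u, {}).items():
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        prev[v] = u
                        heapq.heappush(heap, (nd, v))
            # reconstruct the minimal total-cost path
            path, node = [target], target
            while node != source:
                node = prev[node]
                path.append(node)
            return path[::-1], dist[target]

        # toy network: costs could be, e.g., -log of the direct correlation
        graph = {"A": {"B": 0.2, "C": 0.9}, "B": {"C": 0.3}, "C": {}}
        print(optimal_path(graph, "A", "C"))   # -> (['A', 'B', 'C'], 0.5)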

  3. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  4. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE PAGES

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    2017-08-04

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  5. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.

    2012-06-05

    An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  6. Predictor laws for pictorial flight displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1985-01-01

    Two predictor laws are formulated and analyzed: (1) a circular path law based on constant accelerations perpendicular to the path and (2) a predictor law based on state transition matrix computations. It is shown that for both methods the predictor provides the essential lead zeros for the path-following task. However, in contrast to the circular path law, the state transition matrix law furnishes the system with additional zeros that entirely cancel out the higher-frequency poles of the vehicle dynamics. On the other hand, the circular path law yields a zero steady-state error in following a curved trajectory with a constant radius. A combined predictor law is suggested that utilizes the advantages of both methods. A simple analysis shows that the optimal prediction time mainly depends on the level of precision required in the path-following task, and guidelines for determining the optimal prediction time are given.

  7. Temperature distributions and thermal stresses in a graded zirconia/metal gas path seal system for aircraft gas turbine engines

    NASA Technical Reports Server (NTRS)

    Taylor, C. M.; Bill, R. C.

    1978-01-01

    A ceramic/metallic aircraft gas turbine outer gas path seal designed for improved engine performance was studied. Transient temperature and stress profiles in a test seal geometry were determined by numerical analysis. During a simulated engine deceleration cycle from sea-level takeoff to idle conditions, the maximum seal temperature occurred below the seal surface; therefore, the top layer of the seal was probably subjected to tensile stresses exceeding the modulus of rupture. In the stress analysis both two- and three-dimensional finite element computer programs were used. Predicted trends of the simpler and more easily usable two-dimensional element programs were borne out by the three-dimensional finite element program results.

  8. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results, in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g., cache), existing approaches store selected stages of the computation, and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data are available at Bioinformatics online.
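    The flavor of checkpointed backtrace can be illustrated with the simple uniform-spacing variant (the paper derives an optimal strategy, which this sketch does not implement); the DP recurrence below is a placeholder standing in for a real alignment recurrence.

        # Illustrative sketch: store every k-th row of a DP table during the
        # forward pass, then recompute the rows in between when the backtrace
        # needs them, trading recomputation for memory.
        import math

        def dp_row(prev_row, i):
            # placeholder recurrence: stands in for one stage of a real DP
            return [min(prev_row[j], prev_row[max(j - 1, 0)]) + (i + j) % 3
                    for j in range(len(prev_row))]

        def backtrace_with_checkpoints(n, width):
            k = max(1, math.isqrt(n))            # checkpoint spacing ~ sqrt(n)
            row = [0] * width
            checkpoints = {0: row[:]}
            for i in range(1, n + 1):            # forward pass
                row = dp_row(row, i)
                if i % k == 0:
                    checkpoints[i] = row[:]
            # backtrace: regenerate each row from its nearest earlier checkpoint
            for i in range(n, 0, -1):
                base = (i - 1) // k * k
                r = checkpoints[base][:]
                for j in range(base + 1, i + 1): # recompute forward to row i
                    r = dp_row(r, j)
                # ... inspect r here to take one backtrace step ...
            return row

        backtrace_with_checkpoints(100, 8)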

  9. Continuous quantum measurements and the action uncertainty principle

    NASA Astrophysics Data System (ADS)

    Mensky, Michael B.

    1992-09-01

    The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weakest) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of a gravitational field. A stronger form of the AUP (for ideal measurements performed in the quantum regime), with wider application, is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand correspondingly for the measurement output and for the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.

  10. Ultrasonic Phased Array Assessment of the Interference Fit and Leak Path of the North Anna Unit 2 Control Rod Drive Mechanism Nozzle 63 with Destructive Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Susan L.; Cinson, Anthony D.; MacFarlan, Paul J.

    2012-08-01

    The objective of this investigation was to evaluate the efficacy of ultrasonic testing (UT) for primary water leak path assessments of reactor pressure vessel (RPV) upper head penetrations. Operating reactors have experienced leakage when stress corrosion cracking of nickel-based alloy penetrations allowed primary water into the annulus of the interference fit between the penetration and the low-alloy steel RPV head. In this investigation, UT leak path data were acquired for an Alloy 600 control rod drive mechanism nozzle penetration, referred to as Nozzle 63, which was removed from the North Anna Unit 2 reactor when the RPV head was replaced in 2002. In-service inspection prior to the head replacement indicated that Nozzle 63 had a probable leakage path through the interference fit region. Nozzle 63 was examined using a phased-array UT probe with a 5.0-MHz, eight-element annular array. Immersion data were acquired from the nozzle inner diameter surface. The UT data were interpreted by comparing to responses measured on a mockup penetration with known features. Following acquisition of the UT data, Nozzle 63 was destructively examined to determine if the features identified in the UT examination, including leakage paths and crystalline boric acid deposits, could be visually confirmed. Additional measurements of boric acid deposit thickness and low-alloy steel wastage were made to assess how these factors affect the UT response. The implications of these findings for interpreting UT leak path data are described.

  11. The Effect of Religious Belief on the Mental Health Status and Suicide Probability of Women Exposed to Violence.

    PubMed

    Güngörmüş, Zeynep; Tanrıverdi, Derya; Gündoğan, Tuğba

    2015-10-01

    It is known that violence against women is an important health problem both worldwide and in Turkey (World Health Organization 2005; General Directorate on the Status of Women 2008). Religion can be an important factor in preventing suicide and mental disorders by increasing one's ability to cope with events and channeling his/her perspective on life and the future toward a more positive path, satisfying needs such as the need to be safe, the need for meaning, and the question of the reason for creation (Altuntop 2005). Hence, the objective of our study was to determine the effects of religious belief on the mental health status and suicide probability of women exposed to violence in Turkey. The study used a descriptive design. The study sample consisted of 135 women who had suffered violence and were consecutively admitted to the Department of Emergency of a State Hospital after exposure to violence. They entered the study upon agreeing to complete the questionnaire. The belief levels of the women are based on their own statements, and all are Muslims. The data were collected using a questionnaire form, the Suicide Probability Scale, and the Brief Symptom Inventory, and were analyzed using SPSS version 18.0. Statistical analyses included percentage calculations, the chi-square test, and the Kruskal-Wallis test. In conclusion, a negative relationship was found between the religious belief levels of women exposed to violence in Turkey and their negative moods and suicide probabilities. Hence, nurses, who can spend extended periods alone with these women, can advance the detection and prevention of suicide by decreasing depression through specific methods and helping to overcome hopelessness.

  12. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    NASA Technical Reports Server (NTRS)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open-loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as a series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and its application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
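    The core idea, turning open-loop input optimization into static parameter optimization by expanding the input in a small set of basis functions, can be sketched as follows; the double-integrator plant and quadratic cost are stand-ins for demonstration, not the STOL aircraft model.

        # Minimal sketch: parameterize u(t) by a few polynomial coefficients,
        # simulate the system, and minimize a scalar cost over the coefficients.
        import numpy as np
        from scipy.optimize import minimize

        T, N = 1.0, 100
        t = np.linspace(0.0, T, N)

        def input_series(params):
            # u(t) as a polynomial series with coefficients to be optimized
            return sum(c * t**k for k, c in enumerate(params))

        def cost(params):
            u = input_series(params)
            dt = T / (N - 1)
            x = v = 0.0
            for ui in u:                  # Euler simulation of a double integrator
                v += ui * dt
                x += v * dt
            # penalize endpoint error (x -> 1, v -> 0) plus control effort
            return (x - 1.0)**2 + v**2 + 1e-3 * np.trapz(u**2, t)

        res = minimize(cost, x0=np.zeros(4), method="Nelder-Mead")
        print(res.x)                      # optimized series coefficients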

  13. Investigation on imperfection sensitivity of composite cylindrical shells using the nonlinearity reduction technique and the polynomial chaos method

    NASA Astrophysics Data System (ADS)

    Liang, Ke; Sun, Qin; Liu, Xiaoran

    2018-05-01

    The theoretical buckling load of a perfect cylinder must be reduced by a knock-down factor to account for structural imperfections. The EU project DESICOS proposed a new robust design for imperfection-sensitive composite cylindrical shells using a combination of deterministic and stochastic simulations; however, the high computational complexity seriously limits its wider application in aerospace structure design. In this paper, the nonlinearity reduction technique and the polynomial chaos method are implemented into the robust design process to significantly lower computational costs. The modified Newton-type Koiter-Newton approach, which largely reduces the number of degrees of freedom in the nonlinear finite element model, serves as the nonlinear buckling solver to trace the equilibrium paths of geometrically nonlinear structures efficiently. The non-intrusive polynomial chaos method provides the buckling load with an approximate chaos response surface with respect to imperfections and uses the buckling solver codes as black boxes. A fast large-sample study can then be applied to the approximate chaos response surface to obtain the probability characteristics of the buckling loads. The performance of the method in terms of reliability, accuracy and computational effort is demonstrated with an unstiffened CFRP cylinder.

  14. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  15. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, so the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.

  16. Remote atmospheric probing by ground to ground line of sight optical methods

    NASA Technical Reports Server (NTRS)

    Lawrence, R. S.

    1969-01-01

    The optical effects arising from refractive-index variations in the clear air are qualitatively described, and the possibilities are discussed of using those effects for remotely sensing the physical properties of the atmosphere. The effects include scintillations, path length fluctuations, spreading of a laser beam, deflection of the beam, and depolarization. The physical properties that may be measured include the average temperature along the path, the vertical temperature gradient, and the distribution along the path of the strength of turbulence and the transverse wind velocity. Line-of-sight laser beam methods are clearly effective in measuring the average properties, but less effective in measuring distributions along the path. Fundamental limitations to the resolution are pointed out and experiments are recommended to investigate the practicality of the methods.

  17. Arctic curves in path models from the tangent method

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.

  18. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators

    PubMed Central

    Dosdall, Derek J; Sweeney, James D

    2008-01-01

    Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
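    A rough sketch of the RC-circuit idea extended to many sections: each region's response to a constant-current shock follows an RC charging curve, weighted by an assumed field-gradient factor, with a critical-mass check over all regions. All constants below are illustrative assumptions, not the model's published values.

        # Toy charge-banking computation: each small region of the heart is an
        # RC circuit charged by the local shock field; defibrillation succeeds
        # when a critical fraction of regions exceeds a response threshold.
        import math

        def rc_response(strength, duration, tau=3e-3):
            """Peak RC 'charge' reached by a constant-current pulse."""
            return strength * (1.0 - math.exp(-duration / tau))

        def fraction_above_threshold(field_weights, duration, threshold=1.0):
            """Fraction of regions whose RC response exceeds the threshold."""
            responses = [rc_response(w, duration) for w in field_weights]
            return sum(r >= threshold for r in responses) / len(responses)

        # invented field-gradient weights over 10 regions for a dual-path shock
        weights = [1.9, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.1, 0.9, 0.8]
        frac = fraction_above_threshold(weights, duration=8e-3)
        print(f"{frac:.0%} of regions above threshold")   # critical-mass check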

  19. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criteria for a good path is determined by overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques have proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.

  20. Orbit determination error analysis and comparison of station-keeping costs for Lissajous and halo-type libration point orbits and sensitivity analysis using experimental design techniques

    NASA Technical Reports Server (NTRS)

    Gordon, Steven C.

    1993-01-01

    Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits are probably different. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.

  1. Tumor-Cut: segmentation of brain tumors on contrast enhanced MR images for radiosurgery applications.

    PubMed

    Hamamci, Andac; Kucuk, Nadir; Karaman, Kutlay; Engin, Kayihan; Unal, Gozde

    2012-03-01

    In this paper, we present a fast and robust practical tool for segmentation of solid tumors with minimal user interaction to assist clinicians and researchers in radiosurgery planning and assessment of the response to the therapy. Particularly, a cellular automata (CA) based seeded tumor segmentation method on contrast enhanced T1 weighted magnetic resonance (MR) images, which standardizes the volume of interest (VOI) and seed selection, is proposed. First, we establish the connection of the CA-based segmentation to the graph-theoretic methods to show that the iterative CA framework solves the shortest path problem. In that regard, we modify the state transition function of the CA to calculate the exact shortest path solution. Furthermore, a sensitivity parameter is introduced to adapt to the heterogeneous tumor segmentation problem, and an implicit level set surface is evolved on a tumor probability map constructed from CA states to impose spatial smoothness. Sufficient information to initialize the algorithm is gathered from the user simply by a line drawn on the maximum diameter of the tumor, in line with the clinical practice. Furthermore, an algorithm based on CA is presented to differentiate necrotic and enhancing tumor tissue content, which gains importance for a detailed assessment of radiation therapy response. Validation studies on both clinical and synthetic brain tumor datasets demonstrate 80%-90% overlap performance of the proposed algorithm with an emphasis on less sensitivity to seed initialization, robustness with respect to different and heterogeneous tumor types, and its efficiency in terms of computation time.

  2. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.

  3. Kinematic state estimation and motion planning for stochastic nonholonomic systems using the exponential map

    PubMed Central

    Park, Wooram; Liu, Yan; Zhou, Yu; Moses, Matthew; Chirikjian, Gregory S.

    2010-01-01

    A nonholonomic system subjected to external noise from the environment, or internal noise in its own actuators, will evolve in a stochastic manner described by an ensemble of trajectories. This ensemble of trajectories is equivalent to the solution of a Fokker-Planck equation that typically evolves on a Lie group. If the most likely state of such a system is to be estimated, and plans for subsequent motions from the current state are to be made so as to move the system to a desired state with high probability, then modeling how the probability density of the system evolves is critical. Methods for solving Fokker-Planck equations that evolve on Lie groups then become important. Such equations can be solved using the operational properties of group Fourier transforms, in which irreducible unitary representation (IUR) matrices play a critical role. Therefore, we develop a simple approach for the numerical approximation of all the IUR matrices for two of the groups of most interest in robotics: the rotation group in three-dimensional space, SO(3), and the Euclidean motion group of the plane, SE(2). This approach uses the exponential mapping from the Lie algebras of these groups, and takes advantage of the sparse nature of the Lie algebra representation matrices. Other techniques for density estimation on groups are also explored. The computed densities are applied in the context of probabilistic path planning for a kinematic cart in the plane and flexible needle steering in three-dimensional space. In these examples the injection of artificial noise into the computational models (rather than noise in the actual physical systems) serves as a tool to search the configuration spaces and plan paths. Finally, we illustrate how density estimation problems arise in the characterization of physical noise in orientational sensors such as gyroscopes. PMID:20454468
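    For the planar motion group SE(2) mentioned in the abstract, the exponential map from the Lie algebra has a simple closed form; the sketch below is a generic textbook construction offered for orientation, not the authors' code.

        # Closed-form exponential map se(2) -> SE(2) for an algebra element
        # (v1, v2, omega), returned as a 3x3 homogeneous transform.
        import numpy as np

        def exp_se2(v1, v2, omega):
            if abs(omega) < 1e-12:                   # pure-translation limit
                return np.array([[1.0, 0.0, v1],
                                 [0.0, 1.0, v2],
                                 [0.0, 0.0, 1.0]])
            s, c = np.sin(omega), np.cos(omega)
            # V integrates the rotation along the screw motion
            V = np.array([[s / omega, -(1 - c) / omega],
                          [(1 - c) / omega, s / omega]])
            p = V @ np.array([v1, v2])
            return np.array([[c, -s, p[0]],
                             [s,  c, p[1]],
                             [0.0, 0.0, 1.0]])

        g = exp_se2(1.0, 0.0, np.pi / 2)   # quarter-turn screw motion
        print(np.round(g, 3))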

  4. Virtual animation of victim-specific 3D models obtained from CT scans for forensic reconstructions: Living and dead subjects.

    PubMed

    Villa, C; Olsen, K B; Hansen, S H

    2017-09-01

    Post-mortem CT scanning (PMCT) was introduced at several forensic medical institutions many years ago and has proved to be a useful tool. 3D models of bones, skin, internal organs and bullet paths can rapidly be generated using post-processing software. These 3D models reflect the individual's physiognomy and can be used to create whole-body 3D virtual animations. In this way, virtual reconstructions of the probable ante-mortem postures of victims can be constructed and contribute to understanding the sequence of events. This procedure is demonstrated for two victims of gunshot injuries. Case #1 was a man with three perforating gunshot wounds who died of his injuries. Whole-body PMCT was performed and 3D reconstructions of bones, relevant internal organs and bullet paths were generated. Using 3ds Max software and a human anatomy 3D model, a virtual animated body was built and probable ante-mortem postures visualized. Case #2 was a man presenting three perforating gunshot wounds who survived the incident: one in the left arm and two in the thorax. Only CT scans of the thorax, abdomen and the injured arm were provided by the hospital. Therefore, a whole-body 3D model reflecting the anatomical proportions of the patient was made by combining the actual bones of the victim with those obtained from the human anatomy 3D model. The resulting 3D model was used for the animation process, and several probable postures were also visualized in this case. It was shown that in Case #1 the lesions and the bullet path were not consistent with an upright standing position; instead, the victim was slightly bent forward, i.e. he was sitting or running when he was shot. In Case #2, one of the bullets could have passed through the arm and continued into the thorax. In conclusion, specialized 3D modelling and animation techniques allow for the reconstruction of ante-mortem postures based on both PMCT and clinical CT. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    PubMed

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.

  6. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  7. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each being a bright star point and the radius being a specified angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is treated as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98%, while that of the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.
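    The basic geometric quantity the method builds on, the angular distance between star points inside a circle of chosen radius around a bright anchor star, can be sketched as follows; the catalog values and the 2-degree radius are invented for illustration.

        # Angular distances between unit direction vectors of star points,
        # restricted to a circle around a bright anchor star.
        import numpy as np

        def angular_distance(u, v):
            """Angle (radians) between two unit vectors."""
            return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

        def to_unit(ra, dec):
            """Right ascension / declination (radians) to a unit vector."""
            return np.array([np.cos(dec) * np.cos(ra),
                             np.cos(dec) * np.sin(ra),
                             np.sin(dec)])

        stars = [to_unit(ra, dec) for ra, dec in
                 [(0.10, 0.20), (0.12, 0.21), (0.30, 0.25), (0.11, 0.19)]]
        anchor, radius = stars[0], np.radians(2.0)
        in_circle = [s for s in stars
                     if angular_distance(anchor, s) <= radius]
        # pairwise angular distances inside the circle (the candidate "paths")
        for i in range(len(in_circle)):
            for j in range(i + 1, len(in_circle)):
                d = angular_distance(in_circle[i], in_circle[j])
                print(i, j, np.degrees(d))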

  8. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
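    The kind of lattice computation the guide describes can be sketched compactly for the harmonic oscillator; the parameters below (lattice size, spacing, proposal width, thermalization cut) are illustrative choices, and no over-relaxation or careful error analysis is attempted here.

        # Compact Metropolis sampler for the Euclidean lattice action of the
        # quantum harmonic oscillator (units m = omega = hbar = 1, spacing a).
        import numpy as np

        rng = np.random.default_rng(0)
        N, a, delta, sweeps = 64, 0.5, 1.0, 2000
        x = np.zeros(N)                     # periodic lattice path

        def action_diff(x, i, xn):
            """Change in the Euclidean action when site i moves to xn."""
            ip, im = (i + 1) % N, (i - 1) % N
            S_old = ((x[ip] - x[i])**2 + (x[i] - x[im])**2) / (2 * a) \
                    + a * x[i]**2 / 2
            S_new = ((x[ip] - xn)**2 + (xn - x[im])**2) / (2 * a) \
                    + a * xn**2 / 2
            return S_new - S_old

        x2_samples = []
        for sweep in range(sweeps):
            for i in range(N):              # one Metropolis update per site
                xn = x[i] + rng.uniform(-delta, delta)
                if rng.random() < np.exp(-action_diff(x, i, xn)):
                    x[i] = xn
            if sweep > 500 and sweep % 10 == 0:   # crude thermalization/thinning
                x2_samples.append(np.mean(x**2))

        # <x^2> estimates the ground-state width (continuum value 0.5)
        print(np.mean(x2_samples))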

  9. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and realization is easy. However, BINN has shortcomings when applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and repeated paths when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this method, the AUV is regarded as the core of the BINN and the size of the BINN is set by the detection range of the sensors. The BINN then moves with the AUV, reducing the computation. A virtual target is introduced in the path planning method to ensure that the AUV moves to the real target effectively and avoids large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computational efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method deals with the real-time path planning problem for AUVs efficiently. PMID:28255297

  10. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and realization is easy. However, BINN has shortcomings when applied to AUV path planning in a three-dimensional (3D) unknown environment, including a heavy computational burden when the environment is very large and repeated paths when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this method, the AUV is regarded as the core of the BINN and the size of the BINN is set by the detection range of the sensors. The BINN then moves with the AUV, reducing the computation. A virtual target is introduced in the path planning method to ensure that the AUV moves to the real target effectively and avoids large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computational efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method deals with the real-time path planning problem for AUVs efficiently.
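    The shunting-equation dynamics that BINN-style planners typically use can be sketched in a few lines on a 2D grid (the paper works in 3D); the grid size, gains, unit lateral weights, and greedy readout below are illustrative assumptions rather than the authors' parameter choices.

        # Toy BINN-style landscape: the target injects strong excitation, the
        # obstacle strong inhibition; activity spreads through lateral
        # connections and the vehicle greedily climbs the activity landscape.
        import numpy as np

        A, B, D = 10.0, 1.0, 1.0         # decay rate and activity bounds
        dt, steps, n = 0.005, 600, 10
        I = np.zeros((n, n))
        I[9, 9] = 100.0                   # target: excitatory external input
        I[4, 4:8] = -100.0                # obstacle wall: inhibitory input
        x = np.zeros((n, n))              # neural activity landscape

        def neighbors(i, j):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di or dj) and 0 <= i + di < n and 0 <= j + dj < n:
                        yield (i + di, j + dj)

        for _ in range(steps):            # Euler integration, shunting equation
            xn = x.copy()
            for i in range(n):
                for j in range(n):
                    lateral = sum(max(x[p], 0.0) for p in neighbors(i, j))
                    e = max(I[i, j], 0.0) + lateral    # excitatory input
                    f = max(-I[i, j], 0.0)             # inhibitory input
                    xn[i, j] += dt * (-A * x[i, j] + (B - x[i, j]) * e
                                      - (D + x[i, j]) * f)
            x = xn

        pos, path = (0, 0), [(0, 0)]
        while pos != (9, 9) and len(path) < 40:   # greedy ascent to target
            pos = max(neighbors(*pos), key=lambda p: x[p])
            path.append(pos)
        print(path)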

  11. Study on Unit Cell Models and the Effective Thermal Conductivities of Silica Aerogel.

    PubMed

    Liu, He; Li, Zeng-Yao; Zhao, Xin-Peng; Tao, Wen-Quan

    2015-04-01

    In this paper, two modified unit cell models, a truncated octahedron and a cubic array of intersecting square rods rotated 45 degrees, are developed in consideration of the tortuous path of heat conduction in the solid skeleton of silica aerogel. Heat conduction is analyzed for each model, and expressions for the effective thermal conductivity of the modified unit cell models are derived. Considering the random microstructure of silica aerogel, a probability model is presented. We also discuss the effect of the thermal conductivity of the aerogel backbone. The effective thermal conductivities calculated by the proposed probability model are in good agreement with available experimental data when the density of the aerogel is 110 kg/m3.

  12. Blocking probability in the hose-model optical VPN with different number of wavelengths

    NASA Astrophysics Data System (ADS)

    Roslyakov, Alexander V.

    2017-04-01

    Connection setup with guaranteed quality of service (QoS) in an optical virtual private network (OVPN) is a major goal for network providers. To support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism the OVPN connections can also be created or deleted according to the availability of wavelengths in the optical path. In this paper we consider the impact of the number of wavelengths on the computation of blocking probability. The goal of the work is to dynamically provide the best OVPN connection under frequent arrivals of connection requests with QoS requirements.
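    A common first-order way to relate the number of wavelengths on a path to the blocking probability is the Erlang-B formula, which treats each wavelength as a server in an M/M/c/c loss system; using it here is an illustrative assumption, not the paper's exact model, and the traffic values are toy numbers.

        # Erlang-B blocking probability via the standard stable recursion:
        # B(0) = 1, B(m) = rho*B(m-1) / (m + rho*B(m-1)).
        def erlang_b(c, rho):
            """Blocking probability with c servers and offered load rho (Erlangs)."""
            b = 1.0
            for m in range(1, c + 1):
                b = rho * b / (m + rho * b)
            return b

        offered_load = 30.0                  # arrival rate / service rate
        for wavelengths in (20, 30, 40, 50):
            print(wavelengths, f"{erlang_b(wavelengths, offered_load):.4f}")

    As expected, blocking falls steeply as the number of wavelengths grows past the offered load, which is the qualitative dependence the paper studies.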

  13. Clear-Sky Probability for the August 21, 2017, Total Solar Eclipse Using the NREL National Solar Radiation Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Roberts, Billy J; Kutchenreiter, Mark C

    The National Renewable Energy Laboratory (NREL) and collaborators have created a clear-sky probability analysis to help guide viewers of the August 21, 2017, total solar eclipse, the first continent-spanning eclipse in nearly 100 years in the United States. Using cloud and solar data from NREL's National Solar Radiation Database (NSRDB), the analysis provides cloudless sky probabilities specific to the date and time of the eclipse. Although this paper is not intended to be an eclipse weather forecast, the detailed maps can help guide eclipse enthusiasts to likely optimal viewing locations. Additionally, high-resolution data are presented for the centerline of the path of totality, representing the likelihood for cloudless skies and atmospheric clarity. The NSRDB provides industry, academia, and other stakeholders with high-resolution solar irradiance data to support feasibility analyses for photovoltaic and concentrating solar power generation projects.

  14. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    The heavy rainfall brought by typhoons is the main driver of natural disasters in Taiwan, causing significant losses of human life and property. On average, 3.5 typhoons strike Taiwan every year; Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, characterizing typhoon rainfall types is advantageous when estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, the extended empirical orthogonal function (EEOF) is used to classify typhoon events, decomposing the standardized rainfall pattern at all stations for each event into EOFs and principal components (PCs); events that vary similarly in time and space are grouped into the same typhoon type. Next, according to this classification, we construct probability density functions (PDFs) across space and time using multivariate maximum entropy with the first through fourth statistical moments, which yields the probability at each station and each time. Finally, we use the Bayesian Maximum Entropy (BME) method to construct the typhoon rainfall prediction model and to estimate rainfall for the GaoPing River, located in southern Taiwan. This approach could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.

  15. Application of Probabilistic Methods to Assess Risk Due to Resonance in the Design of J-2X Rocket Engine Turbine Blades

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; DeHaye, Michael; DeLessio, Steven

    2011-01-01

    The LOX-Hydrogen J-2X Rocket Engine, which is proposed for use as an upper-stage engine for numerous earth-to-orbit and heavy lift launch vehicle architectures, is presently in the design phase and will move shortly to the initial development test phase. Analysis of the design has revealed numerous potential resonance issues with hardware in the turbomachinery turbine-side flow-path. The analysis of the fuel pump turbine blades requires particular care because resonant failure of the blades, which are rotating in excess of 30,000 revolutions per minute (RPM), could be catastrophic for the engine and the entire launch vehicle. This paper describes a series of probabilistic analyses performed to assess the risk of failure of the turbine blades due to resonant vibration during past and present test series. Some significant results are that the probability of failure during a single complete engine hot-fire test is low (1%) because of the small likelihood of resonance, but that the probability increases to around 30% for a more focused turbomachinery-only test because all speeds will be ramped through and there is a greater likelihood of dwelling at more speeds. These risk calculations have been invaluable for use by program management in deciding if risk-reduction methods such as dampers are necessary immediately or if the test can be performed before the risk-reduction hardware is ready.
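
    The paper's actual analyses are not reproduced here, but the generic Monte Carlo pattern behind such a risk estimate can be sketched: sample an uncertain blade natural frequency, check whether it falls in a dwelled excitation band, and count failures. All numbers below (frequencies, band, conditional failure probability) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

# Hypothetical inputs: blade natural frequency uncertain (4% CoV normal);
# a turbomachinery test dwells at speeds exciting a band of frequencies.
f_nat = rng.normal(loc=4800.0, scale=0.04 * 4800.0, size=n_trials)  # Hz
band_lo, band_hi = 4600.0, 4700.0                                   # Hz

# "Resonance" here means the natural frequency lands in the dwell band;
# failure additionally requires high response given resonance (p = 0.2).
resonant = (f_nat >= band_lo) & (f_nat <= band_hi)
fails = resonant & (rng.random(n_trials) < 0.2)
print("P(resonance) ~", resonant.mean(), "  P(failure) ~", fails.mean())
```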

  16. Optimization of magnet end-winding geometry

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.

    1994-03-01

    A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.

  17. Unified path integral approach to theories of diffusion-influenced reactions

    NASA Astrophysics Data System (ADS)

    Prüstel, Thorsten; Meier-Schellersheim, Martin

    2017-08-01

    Building on mathematical similarities between quantum mechanics and theories of diffusion-influenced reactions, we develop a general approach for computational modeling of diffusion-influenced reactions that is capable of capturing not only the classical Smoluchowski picture but also alternative theories, as is here exemplified by a volume reactivity model. In particular, we prove the path decomposition expansion of various Green's functions describing the irreversible and reversible reaction of an isolated pair of molecules. To this end, we exploit a connection between boundary value and interaction potential problems with δ- and δ′-function perturbations. We employ a known path-integral-based summation of a perturbation series to derive a number of exact identities relating propagators and survival probabilities satisfying different boundary conditions in a unified and systematic manner. Furthermore, we show how the path decomposition expansion represents the propagator as a product of three factors in the Laplace domain that correspond to quantities figuring prominently in stochastic spatially resolved simulation algorithms. This analysis will thus be useful for the interpretation of current and the design of future algorithms. Finally, we discuss the relation between the general approach and the theory of Brownian functionals and calculate the mean residence time for the case of irreversible and reversible reactions.

  18. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of one hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath-assisted long-range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.

  19. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to any realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good accordance with experiments. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  20. Network of dedicated processors for finding lowest-cost map path

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    A method and associated apparatus are disclosed for finding the lowest cost path of several variable paths. The paths are comprised of a plurality of linked cost-incurring areas existing between an origin point and a destination point. The method comprises the steps of connecting a plurality of nodes together in the manner of the cost-incurring areas; programming each node to have a cost associated therewith corresponding to one of the cost-incurring areas; injecting a signal into one of the nodes representing the origin point; propagating the signal through the plurality of nodes from inputs to outputs; reducing the signal in magnitude at each node as a function of the respective cost of the node; and, starting at one of the nodes representing the destination point and following a path having the least reduction in magnitude of the signal from node to node back to one of the nodes representing the origin point, whereby the lowest cost path from the origin point to the destination point is found.
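
    A software analogue of this analog-hardware idea can be sketched as follows: inject a unit signal at the origin, attenuate it multiplicatively by each node's cost, relax until convergence, then backtrack from the destination toward increasing signal strength. The grid and costs below are illustrative, not from the patent.

```python
import numpy as np

# Illustrative cost map: each cell attenuates a passing signal by
# exp(-cost), so the strongest signal at the destination marks the
# lowest-cost path.
cost = np.array([[1.0, 4.0, 4.0, 1.0],
                 [1.0, 9.0, 9.0, 1.0],
                 [1.0, 1.0, 1.0, 1.0],
                 [9.0, 9.0, 1.0, 1.0]])
atten = np.exp(-cost)
rows, cols = cost.shape
signal = np.zeros_like(cost)
signal[0, 0] = 1.0                      # origin
steps = ((1, 0), (-1, 0), (0, 1), (0, -1))

# Bellman-Ford-style relaxation: a node's signal becomes the best
# neighbouring signal times its own attenuation (the hardware does
# this in parallel; here we sweep iteratively).
for _ in range(rows * cols):
    for r in range(rows):
        for c in range(cols):
            for dr, dc in steps:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    signal[rr, cc] = max(signal[rr, cc],
                                         signal[r, c] * atten[rr, cc])

# Backtrack from the destination by always stepping to the strongest
# neighbour; the signal strictly increases toward the origin.
node, path = (rows - 1, cols - 1), [(rows - 1, cols - 1)]
while node != (0, 0):
    r, c = node
    nbrs = [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]
    node = max(nbrs, key=lambda n: signal[n])
    path.append(node)
print("lowest-cost path, destination -> origin:", path)
```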

  1. Compensation of high order harmonic long quantum-path attosecond chirp

    NASA Astrophysics Data System (ADS)

    Guichard, R.; Caillat, J.; Lévêque, C.; Risoud, F.; Maquet, A.; Taïeb, R.; Zaïr, A.

    2017-12-01

    We propose a method to compensate for the extreme ultraviolet (XUV) attosecond chirp associated with the long quantum-path in the high harmonic generation process. Our method employs an isolated attosecond pulse (IAP) issued from the short trajectory contribution in a primary target to assist the infrared driving field to produce high harmonics from the long trajectory in a secondary target. In our simulations based on the resolution of the time-dependent Schrödinger equation, the resulting high harmonics show a clear phase compensation of the long quantum-path contribution, yielding a nearly Fourier-transform-limited attosecond XUV pulse. Employing time-frequency analysis of the high harmonic dipole, we found that the compensation is not a simple far-field photonic interference between the IAP and the long-path harmonic emission, but a coherent phase transfer from the weak IAP to the long quantum-path electronic wavepacket. Our approach opens the route to utilizing the long quantum-path for the production and applications of attosecond pulses.

  2. A variational dynamic programming approach to robot-path planning with a distance-safety criterion

    NASA Technical Reports Server (NTRS)

    Suh, Suk-Hwan; Shin, Kang G.

    1988-01-01

    An approach to robot-path planning is developed by considering both the traveling distance and the safety of the robot. A computationally-efficient algorithm is developed to find a near-optimal path with a weighted distance-safety criterion by using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels. A method for deriving these channels is also proposed. Although it is developed mainly for two-dimensional problems, this method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.

  3. Paleohydrology of the southern Great Basin, with special reference to water table fluctuations beneath the Nevada Test Site during the late(?) Pleistocene

    USGS Publications Warehouse

    Winograd, Isaac Judah; Doty, Gene C.

    1980-01-01

    Knowledge of the magnitude of water-table rise during Pleistocene pluvial climates, and of the resultant shortening of groundwater flow path and reduction in unsaturated zone thickness, is mandatory for a technical evaluation of the Nevada Test Site (NTS) or other arid zone sites as repositories for high-level or transuranic radioactive wastes. The distribution of calcitic veins filling fractures in alluvium, and of tufa deposits between the Ash Meadows spring discharge area and the Nevada Test Site, indicates that discharge from the regional Paleozoic carbonate aquifer during the late(?) Pleistocene pluvial periods may have occurred at an altitude about 50 meters higher than at present and 14 kilometers northeast of Ash Meadows. Use of the underflow equation (relating discharge to transmissivity, aquifer width, and hydraulic gradient), and various assumptions regarding pluvial recharge, transmissivity, and altitude of groundwater base level, suggest possible rises in potentiometric level in the carbonate aquifer of about 90 meters beneath central Frenchman Flat. During Wisconsin time the rise probably did not exceed 30 meters. Water-level rises beneath Frenchman Flat during future pluvials are unlikely to exceed 30 meters and might even be 10 meters lower than modern levels. Neither the cited rise in potentiometric level in the regional carbonate aquifer, nor the shortened flow path during the late(?) Pleistocene, precludes utilization of the NTS as a repository for high-level or transuranic-element radioactive wastes provided other requisite conditions are met at this site. Deep water tables, attendant thick (up to several hundred meter) unsaturated zones, and long groundwater flow paths characterized the region during the Wisconsin Stage and probably throughout the Pleistocene Epoch and are likely to so characterize it during future glacial periods. (USGS)

  4. Computational Role of Tunneling in a Programmable Quantum Annealer

    NASA Technical Reports Server (NTRS)

    Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut

    2016-01-01

    Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA Quantum Master Equation to describe the multiqubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.

  5. Numerical simulations of the spread of floating passive tracer released at the Old Harry prospect

    NASA Astrophysics Data System (ADS)

    Bourgault, Daniel; Cyr, Frédéric; Dumont, Dany; Carter, Angela

    2014-05-01

    The Gulf of St Lawrence is under immediate pressure for oil and gas exploration, particularly at the Old Harry prospect. A synthesis of the regulatory process that has taken place over the last few years indicates that important societal decisions soon to be made by various ministries and environmental groups are going to be based on numerous disagreements between the private sector and government agencies. The review also shows that the regulatory process has taken place with a complete lack of independent oceanographic research. Yet, the Gulf of St Lawrence is a complex environment that has never been specifically studied for oil and gas exploitation. Motivated by this knowledge gap, preliminary numerical experiments are carried out in which the spreading of a passive floating tracer released at Old Harry is examined. Results indicate that the tracer released at Old Harry may preferentially follow two main paths. The first path is northward along the French Shore of Newfoundland, and the second path is along the main axis of the Laurentian Channel. The most probable coastlines to be touched by water flowing through Old Harry are Cape Breton and the southern portion of the French Shore, especially Cape Anguille and the Port au Port Peninsula. The Magdalen Islands are less susceptible to being affected than those regions, but the probability is not negligible. These preliminary results provide guidance for future, more in-depth and complete multidisciplinary studies from which informed decision-making scenarios could eventually be made regarding the exploration and development of oil and gas at the Old Harry prospect in particular and, more generally, in the Gulf of St Lawrence.

  6. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object, and focuses on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here it is chosen so that the air vehicle reaches the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The main results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. We also show that the cost functional used is convex; this convexity guarantees the existence of an optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.

  7. Dynamics of magnetic shells and information loss problem

    NASA Astrophysics Data System (ADS)

    Lee, Bum-Hoon; Lee, Wonwoo; Yeom, Dong-han

    2015-07-01

    We investigate the dynamics of magnetic thin-shells in a three-dimensional anti-de Sitter background. Because of the magnetic field, an oscillatory solution is possible. This oscillating shell can tunnel to a collapsing shell or a bouncing shell, where both tunnelings induce an event horizon and a singularity. In the entire path integral, via the oscillating solution, there is a nonzero probability to maintain a trivial causal structure without a singularity. Therefore, due to the path integral, the entire wave function can conserve information. Since an oscillating shell can tunnel after a number of oscillations, in the end, it will allow an infinite number of different branchings to classical histories. This system can be a good model of the effective loss of information, where information is conserved by a solution that originates from gauge fields.

  8. Discrete-Event-Dynamic-System-Based Approaches for Control in Integrated Voice/Data Multihop Radio Networks.

    DTIC Science & Technology

    1994-12-07

    The set Ci is such that Ci ⊆ A and, in general, Ci ∩ Cj ≠ ∅ for i ≠ j. [OCR-garbled passage omitted.] Let [θ1, ..., θM]^T be the M-dimensional slot assignment probability vector and Wi(θ) the expected waiting time at node i; our objective is to determine... [Figure residue omitted; recoverable captions: "Nominal Sample Path" and "Figure 2b - (2,m) Phantom Slot Sample Path".]

  9. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from the typical slow O(N^(-1/2)) convergence rate as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely τ-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
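
    As a toy illustration of pairing randomized QMC with τ-leaping (not the paper's benchmark problems), the sketch below simulates a single decay reaction X -> 0 and drives each leap's Poisson increment by inverse-CDF sampling, fed either pseudorandom or scrambled-Sobol uniforms. The rate constants and sizes are invented.

```python
import numpy as np
from scipy.stats import poisson, qmc

def tau_leap_final_state(uniforms, x0=100, c=0.1, tau=0.1):
    """Tau-leap paths of the decay reaction X -> 0 (propensity c*X), with
    each leap's Poisson draw taken by inverse CDF from a given uniform."""
    n_paths, n_steps = uniforms.shape
    x = np.full(n_paths, float(x0))
    for k in range(n_steps):
        lam = c * x * tau                       # leap-wise Poisson mean
        x = np.maximum(x - poisson.ppf(uniforms[:, k], lam), 0.0)
    return x

n_paths, n_steps = 1024, 20
rng = np.random.default_rng(0)
mc = tau_leap_final_state(rng.random((n_paths, n_steps))).mean()

sobol = qmc.Sobol(d=n_steps, scramble=True, seed=0)
rqmc = tau_leap_final_state(sobol.random(n_paths)).mean()
print("plain MC estimate:", mc, "  randomized QMC estimate:", rqmc)
```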

  10. Cassette less SOFC stack and method of assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhardt, Kerry D

    2014-11-18

    A cassette less SOFC assembly and a method for creating such an assembly. The SOFC stack is characterized by an electrically isolated stack current path which allows welded interconnection between frame portions of the stack. In one embodiment, electrically isolating the current path comprises the step of sealing an interconnect plate to an interconnect plate frame with an insulating seal. This enables the current path portion to be isolated from the structural frame and enables the cell frames to be welded together.

  11. Looping probabilities of elastic chains: a path integral approach.

    PubMed

    Cotta-Ramusino, Ludovica; Maddocks, John H

    2010-11-01

    We consider an elastic chain at thermodynamic equilibrium with a heat bath, and derive an approximation to the probability density function, or pdf, governing the relative location and orientation of the two ends of the chain. Our motivation is to exploit continuum mechanics models for the computation of DNA looping probabilities, but here we focus on explaining the novel analytical aspects in the derivation of our approximation formula. Accordingly, and for simplicity, the current presentation is limited to the illustrative case of planar configurations. A path integral formalism is adopted, and, in the standard way, the first approximation to the looping pdf is obtained from a minimal energy configuration satisfying prescribed end conditions. Then we compute an additional factor in the pdf which encompasses the contributions of quadratic fluctuations about the minimum energy configuration along with a simultaneous evaluation of the partition function. The original aspects of our analysis are twofold. First, the quadratic Lagrangian describing the fluctuations has cross-terms that are linear in first derivatives. This, seemingly small, deviation from the structure of standard path integral examples complicates the necessary analysis significantly. Nevertheless, after a nonlinear change of variable of Riccati type, we show that the correction factor to the pdf can still be evaluated in terms of the solution to an initial value problem for the linear system of Jacobi ordinary differential equations associated with the second variation. The second novel aspect of our analysis is that we show that the Hamiltonian form of these linear Jacobi equations still provides the appropriate correction term in the inextensible, unshearable limit that is commonly adopted in polymer physics models of, e.g. DNA. Prior analyses of the inextensible case have had to introduce nonlinear and nonlocal integral constraints to express conditions on the relative displacement of the end points. Our approximation formula for the looping pdf is of quite general applicability as, in contrast to most prior approaches, no assumption is made of either uniformity of the elastic chain, nor of a straight intrinsic shape. If the chain is uniform the Jacobi system evaluated at certain minimum energy configurations has constant coefficients. In such cases our approximate pdf can be evaluated in an entirely explicit, closed form. We illustrate our analysis with a planar example of this type and compute an approximate probability of cyclization, i.e., of forming a closed loop, from a uniform elastic chain whose intrinsic shape is an open circular arc.

  12. Aerosol mass spectrometry systems and methods

    DOEpatents

    Fergenson, David P.; Gard, Eric E.

    2013-08-20

    A system according to one embodiment includes a particle accelerator that directs a succession of polydisperse aerosol particles along a predetermined particle path; multiple tracking lasers for generating beams of light across the particle path; an optical detector positioned adjacent the particle path for detecting impingement of the beams of light on individual particles; a desorption laser for generating a beam of desorbing light across the particle path about coaxial with a beam of light produced by one of the tracking lasers; and a controller, responsive to detection of a signal produced by the optical detector, that controls the desorption laser to generate the beam of desorbing light. Additional systems and methods are also disclosed.

  13. Robotics virtual rail system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.

  14. Velocity Inversion In Cylindrical Couette Gas Flows

    NASA Astrophysics Data System (ADS)

    Dongari, Nishanth; Barber, Robert W.; Emerson, David R.; Zhang, Yonghao; Reese, Jason M.

    2012-05-01

    We investigate a power-law probability distribution function to describe the mean free path of rarefied gas molecules in non-planar geometries. A new curvature-dependent model is derived by taking into account the boundary-limiting effects on the molecular mean free path for surfaces with both convex and concave curvatures. In comparison to a planar wall, we find that the mean free path for a convex surface is higher at the wall and exhibits a sharper gradient within the Knudsen layer. In contrast, a concave wall exhibits a lower mean free path near the surface and the gradients in the Knudsen layer are shallower. The Navier-Stokes constitutive relations and velocity-slip boundary conditions are modified based on a power-law scaling to describe the mean free path, in accordance with the kinetic theory of gases, i.e. transport properties can be described in terms of the mean free path. Velocity profiles for isothermal cylindrical Couette flow are obtained using the power-law model. We demonstrate that our model is more accurate than the classical slip solution, especially in the transition regime, and we are able to capture important non-linear trends associated with the non-equilibrium physics of the Knudsen layer. In addition, we establish a new criterion for the critical accommodation coefficient that leads to the non-intuitive phenomena of velocity-inversion. Our results are compared with conventional hydrodynamic models and direct simulation Monte Carlo data. The power-law model predicts that the critical accommodation coefficient is significantly lower than that calculated using the classical slip solution and is in good agreement with available DSMC data. Our proposed constitutive scaling for non-planar surfaces is based on simple physical arguments and can be readily implemented in conventional fluid dynamics codes for arbitrary geometric configurations.

  15. Feynman formulae and phase space Feynman path integrals for tau-quantization of some Lévy-Khintchine type Hamilton functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butko, Yana A., E-mail: yanabutko@yandex.ru, E-mail: kinderknecht@math.uni-sb.de; Grothaus, Martin, E-mail: grothaus@mathematik.uni-kl.de; Smolyanov, Oleg G., E-mail: Smolyanov@yandex.ru

    2016-02-15

    Evolution semigroups generated by pseudo-differential operators are considered. These operators are obtained by different (parameterized by a number τ) procedures of quantization from a certain class of functions (or symbols) defined on the phase space. This class contains Hamilton functions of particles with variable mass in magnetic and potential fields and more general symbols given by the Lévy-Khintchine formula. The considered semigroups are represented as limits of n-fold iterated integrals when n tends to infinity. Such representations are called Feynman formulae. Some of these representations are constructed with the help of another pseudo-differential operator, obtained by the same procedure of quantization; such representations are called Hamiltonian Feynman formulae. Some representations are based on integral operators with elementary kernels; these are called Lagrangian Feynman formulae. Lagrangian Feynman formulae provide approximations of evolution semigroups, suitable for direct computations and numerical modeling of the corresponding dynamics. Hamiltonian Feynman formulae allow one to represent the considered semigroups by means of Feynman path integrals. In the article, a family of phase space Feynman pseudomeasures corresponding to different procedures of quantization is introduced. The considered evolution semigroups are represented as phase space Feynman path integrals with respect to these Feynman pseudomeasures, i.e., different quantizations correspond to Feynman path integrals with the same integrand but with respect to different pseudomeasures. This answers Berezin's problem of distinguishing a procedure of quantization in the language of Feynman path integrals. Moreover, the obtained Lagrangian Feynman formulae also allow one to calculate these phase space Feynman path integrals and to connect them with some functional integrals with respect to probability measures.

  16. Uncertainty, variability, and earthquake physics in ground‐motion prediction equations

    USGS Publications Warehouse

    Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.

    2017-01-01

    Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20  km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
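-
    The decomposition into repeatable source, site, and path terms can be sketched with successive group means over synthetic residuals, a crude method-of-moments stand-in for the mixed-effects regressions actually used in GMPE work; all sizes and sigmas below are invented.

```python
import numpy as np
import pandas as pd

# Synthetic residuals (ln units): each record has an event, a station,
# and a source-to-site path cell, plus purely random scatter.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "event":   rng.integers(0, 200, n),
    "station": rng.integers(0, 50, n),
    "path":    rng.integers(0, 300, n),
})
df["resid"] = (0.30 * rng.standard_normal(200)[df.event]
               + 0.25 * rng.standard_normal(50)[df.station]
               + 0.20 * rng.standard_normal(300)[df.path]
               + 0.40 * rng.standard_normal(n))

# Peel off repeatable terms as successive group means; what remains
# approximates the single-source, single-site, single-path sigma.
r = df.resid.copy()
for term in ("event", "station", "path"):
    r = r - r.groupby(df[term]).transform("mean")
print("total sigma:", df.resid.std().round(3),
      " nonergodic sigma:", r.std().round(3))
```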

  17. State-specific tunneling lifetimes from classical trajectories: H-atom dissociation in electronically excited pyrrole

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiwei; Domcke, Wolfgang; Farantos, Stavros C.

    A trajectory method of calculating tunneling probabilities from phase integrals along straight line tunneling paths, originally suggested by Makri and Miller [J. Chem. Phys. 91, 4026 (1989)] and recently implemented by Truhlar and co-workers [Chem. Sci. 5, 2091 (2014)], is tested for one- and two-dimensional ab initio based potentials describing hydrogen dissociation in the ¹B₁ excited electronic state of pyrrole. The primary observables are the tunneling rates in a progression of bending vibrational states lying below the dissociation barrier and their isotope dependences. Several initial ensembles of classical trajectories have been considered, corresponding to the quasiclassical and the quantum mechanical samplings of the initial conditions. It is found that the sampling based on the fixed energy Wigner density gives the best agreement with the quantum mechanical dissociation rates.

  18. Unsupervised Calculation of Free Energy Barriers in Large Crystalline Systems

    NASA Astrophysics Data System (ADS)

    Swinburne, Thomas D.; Marinica, Mihai-Cosmin

    2018-03-01

    The calculation of free energy differences for thermally activated mechanisms in the solid state is routinely hindered by the inability to define a set of collective variable functions that accurately describe the mechanism under study. Even when possible, the requirement of descriptors for each mechanism under study prevents implementation of free energy calculations in the growing range of automated material simulation schemes. We provide a solution, deriving a path-based, exact expression for free energy differences in the solid state which does not require a converged reaction pathway, collective variable functions, Gram matrix evaluations, or probability flux-based estimators. The generality and efficiency of our method is demonstrated on a complex transformation of C15 interstitial defects in iron and double kink nucleation on a screw dislocation in tungsten, the latter system consisting of more than 120 000 atoms. Both cases exhibit significant anharmonicity under experimentally relevant temperatures.

  19. Dissolution Dynamic Nuclear Polarization capability study with fluid path

    NASA Astrophysics Data System (ADS)

    Malinowski, Ronja M.; Lipsø, Kasper W.; Lerche, Mathilde H.; Ardenkjær-Larsen, Jan H.

    2016-11-01

    Signal enhancement by hyperpolarization is a way of overcoming the low sensitivity in magnetic resonance; MRI in particular. One of the most well-known methods, dissolution Dynamic Nuclear Polarization, has been used clinically in cancer patients. One way of ensuring a low bioburden of the hyperpolarized product is by use of a closed fluid path that constitutes a barrier to contamination. The fluid path can be filled with the pharmaceuticals, i.e., imaging agent and solvents, in a clean room, and then stored or immediately used at the polarizer. In this study, we present a method of filling the fluid path that allows it to be reused. The filling method has been investigated in terms of reproducibility at two extrema, a high dose for patient use and a low dose for rodent studies, using [1-13C]pyruvate as an example. We demonstrate that the filling method allows high reproducibility of six quality control parameters, with standard deviations 3-10 times smaller than the acceptance criteria intervals in clinical studies.

  20. Designing the Alluvial Riverbeds in Curved Paths

    NASA Astrophysics Data System (ADS)

    Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina

    2017-10-01

    The paper presents a method of determining the shape of the riverbed in curves of a watercourse, based on the method of Ikeda (1975) developed for a slightly curved path in a sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides an appropriate basis for river restoration. Based on research in the experimental reach of the Holeška Brook and several alluvial mountain streams, the methodology was adjusted. The method also takes into account other important characteristics of the bottom material: the shape and orientation of the particles, settling velocity, and drag coefficients. Thus, the method is mainly meant for natural sand-gravel material, which is heterogeneous and whose particle shapes are very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of an optimal habitat, but also for the design of the foundations of bankside armouring of the channel. The input data are adapted to the conditions of design practice.

  1. Graph drawing using tabu search coupled with path relinking.

    PubMed

    Dib, Fadi K; Rodgers, Peter

    2018-01-01

    Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods had been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (the objective function's value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experiments were run on both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability, as it can lay out larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset.
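
    A compact sketch of the two ingredients, tabu search over single-node moves and path relinking between elite layouts, applied to a toy straight-line layout objective. It is deliberately simplified (no aspiration criterion, relinking done once at the end rather than periodically), and every constant in it is made up.

```python
import itertools, math, random

random.seed(1)
# Tiny illustrative graph and a two-term layout objective: edge lengths
# near 1.0 plus a repulsion penalty for node pairs closer than 0.5.
n_nodes, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cost(pos):
    c = sum((math.dist(pos[a], pos[b]) - 1.0) ** 2 for a, b in edges)
    for a, b in itertools.combinations(range(n_nodes), 2):
        c += 10.0 * max(0.0, 0.5 - math.dist(pos[a], pos[b])) ** 2
    return c

def neighbours(pos):
    # candidate moves: shift one node by a fixed offset in one direction
    for v in range(n_nodes):
        for dx, dy in ((0.2, 0), (-0.2, 0), (0, 0.2), (0, -0.2)):
            q = dict(pos)
            q[v] = (pos[v][0] + dx, pos[v][1] + dy)
            yield (v, dx, dy), q

def tabu_search(pos, iters=300, tenure=7):
    best, tabu = dict(pos), {}          # tabu maps move -> expiry iteration
    for t in range(iters):
        move, pos = min(((m, q) for m, q in neighbours(pos)
                         if tabu.get(m, -1) < t),
                        key=lambda mq: cost(mq[1]))
        tabu[move] = t + tenure
        if cost(pos) < cost(best):
            best = dict(pos)
    return best

def path_relink(a, b, steps=10):
    # walk from layout a toward layout b, keeping the best interpolant
    best = min((dict(a), dict(b)), key=cost)
    for k in range(1, steps):
        s = k / steps
        q = {v: ((1 - s) * a[v][0] + s * b[v][0],
                 (1 - s) * a[v][1] + s * b[v][1]) for v in a}
        if cost(q) < cost(best):
            best = q
    return best

elite = sorted((tabu_search({v: (2 * random.random(), 2 * random.random())
                             for v in range(n_nodes)}) for _ in range(3)),
               key=cost)
layout = path_relink(elite[0], elite[1])    # relink the two best layouts
print("final layout cost:", round(cost(layout), 4))
```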

  3. Eigenvector centrality for geometric and topological characterization of porous media

    NASA Astrophysics Data System (ADS)

    Jimenez-Martinez, Joaquin; Negre, Christian F. A.

    2017-07-01

    Solving flow and transport through complex geometries such as porous media is computationally difficult. Such calculations usually involve the solution of a system of discretized differential equations, which could lead to extreme computational cost depending on the size of the domain and the accuracy of the model. Geometric simplifications like pore networks, where the pores are represented by nodes and the pore throats by edges connecting pores, have been proposed. These models, despite their ability to preserve the connectivity of the medium, have difficulties capturing preferential paths (high velocity) and stagnation zones (low velocity), as they do not consider the specific relations between nodes. Nonetheless, network theory approaches, where a complex network is a graph, can help to simplify and better understand fluid dynamics and transport in porous media. Here we present an alternative method to address these issues based on eigenvector centrality, which has been corrected to overcome the centralization problem and modified to introduce a bias in the centrality distribution along a particular direction to address the flow and transport anisotropy in porous media. We compare the model predictions with millifluidic transport experiments, which shows that, albeit simple, this technique is computationally efficient and has potential for predicting preferential paths and stagnation zones for flow and transport in porous media. We propose to use the eigenvector centrality probability distribution to compute the entropy as an indicator of the "mixing capacity" of the system.
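
    The flavor of a direction-biased eigenvector centrality can be sketched on a toy pore network: weight throats aligned with the mean flow more heavily, run power iteration for the dominant eigenvector, and take the entropy of the resulting distribution as the "mixing capacity" indicator. The bias construction here is an assumption for illustration, not the authors' exact modification.

```python
import numpy as np

# Illustrative pore network: an n-by-n grid of pores with throats between
# neighbours; x-aligned throats get a larger weight to mimic a flow bias.
n, bias = 5, 2.0
N = n * n
A = np.zeros((N, N))
for r in range(n):
    for c in range(n):
        i = r * n + c
        if c + 1 < n:                       # x-aligned throat, biased
            A[i, i + 1] = A[i + 1, i] = bias
        if r + 1 < n:                       # y-aligned throat
            A[i, i + n] = A[i + n, i] = 1.0

# Power iteration for the dominant eigenvector (eigenvector centrality).
v = np.ones(N) / N
for _ in range(500):
    v = A @ v
    v /= np.linalg.norm(v)
cent = v / v.sum()                          # centrality as a probability

# Shannon entropy of the centrality distribution as a mixing indicator.
entropy = -np.sum(cent * np.log(cent))
print("entropy:", round(float(entropy), 3),
      " max possible:", round(float(np.log(N)), 3))
```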

  4. Emerging technology in fiber optic sensors

    NASA Astrophysics Data System (ADS)

    Dyott, Richard B.

    1991-03-01

    Some recent innovations in interferoinetric fiber optic sensors include special fibers new components and sensor systems. Many of the concepts have precedents in microwaves. 1. GENERAL PRINCIPLES The application of optical fibers to sensors is diffuse compared with their application to optical communications which is essentially focused on the single problem of how to get information from A to B. A fiber sensor is viable when it can do something not possible with better than more cheaply than any existing method. The probability of the emergence of a new sensor depends on the length of time that a need for the sensor and the possibility of meeting that need have co-existed regardless of whether the need or the possibility has appeared first. 2. TYPES OF SENSOR Fiber sensors can be divided into: a) Multimode fiber sensors which depend on amplitude effects b) Single mode (single path) fiber sensors which depend on phase effects. Since multimode fiber has existed for many decades the emergence of a new multimode sensor depends mostly on the discovery of a new need for such a sensor. On the other hand single mode/single path (i. e. polarization maintaining) fiber is relatively new and so is still being applied to existing needs. This is particularly so of recent innovations in fibers and components. SPIE Vol. 1396 Applications of Optical Engineering Proceedings of OE/Midwest ''90 / 709

  5. Theory of Disk-to-Vesicle Transformation

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Shi, An-Chang

    2009-03-01

    Self-assembled membranes from amphiphilic molecules, such as lipids and block copolymers, can assume a variety of morphologies dictated by energy minimization of system. The membrane energy is characterized by a bending modulus (κ), a Gaussian modulus (κG), and the line tension (γ) of the edge. Two basic morphologies of membranes are flat disks that minimize the bending energy at the cost of the edge energy, and enclosed vesicles that minimize the edge energy at the cost of bending energy. In our work, the transition from disk to vesicle is studied theoretically using the string method, which is designed to find the minimum energy path (MEP) or the most probable transition path between two local minima of an energy landscape. Previous studies of disk-to-vesicle transition usually approximate the transitional states by a series of spherical cups, and found that the spherical cups do not correspond to stable or meta-stable states of the system. Our calculation demonstrates that the intermediate shapes along the MEP are very different from spherical cups. Furthermore, some of these transitional states can be meta-stable. The disk-to-vesicle transition pathways are governed by two scaled parameters, κG/κ and γR0/4κ, where R0 is the radius of the disk. In particular, a meta-stable intermediate state is predicted, which may correspond to the open morphologies observed in experiments and simulations.

  6. Trading Robustness Requirements in Mars Entry Trajectory Design

    NASA Technical Reports Server (NTRS)

    Lafleur, Jarret M.

    2009-01-01

    One of the most important metrics characterizing an atmospheric entry trajectory in preliminary design is the size of its predicted landing ellipse. Often, requirements for this ellipse are set early in design and significantly influence both the expected scientific return from a particular mission and the cost of development. Requirements typically specify a certain probability level (σ-level) for the prescribed ellipse, and frequently this latter requirement is taken at 3σ. However, searches for the justification of 3σ as a robustness requirement suggest it is an empirical rule of thumb borrowed from non-aerospace fields. This paper presents an investigation into the sensitivity of trajectory performance to varying robustness (σ-level) requirements. The treatment of robustness as a distinct objective is discussed, and an analysis framework is presented involving the manipulation of design variables to effect trades between performance and robustness objectives. The scenario for which this method is illustrated is the ballistic entry of an MSL-class Mars entry vehicle. Here, the design variable is entry flight path angle, and objectives are parachute deploy altitude performance and error ellipse robustness. Resulting plots show the sensitivities between these objectives and trends in the entry flight path angles required to design to these objectives. Relevance to the trajectory designer is discussed, as are potential steps for further development and use of this type of analysis.
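
    The robustness trade the paper studies can be made concrete with a toy Monte Carlo: draw synthetic landing dispersions and observe how the required containment radius grows as the σ-level requirement tightens. The sketch uses the 1-D normal probability convention for "k-sigma", and the dispersion covariance is invented.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
# Synthetic landing dispersions (km): a 2-D normal stand-in for an entry
# Monte Carlo's downrange/crossrange miss distances.
miss = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.0], [1.0, 1.5]],
                               size=20_000)
radius = np.linalg.norm(miss, axis=1)

# The containment radius required of the landing "ellipse" grows with
# the sigma-level chosen as the robustness requirement.
for k in (1, 2, 3):
    p = norm.cdf(k) - norm.cdf(-k)      # 1-D sigma-level -> probability
    print(f"{k}-sigma ({100 * p:.1f}%): radius = "
          f"{np.quantile(radius, p):.2f} km")
```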

  7. An improved approximate network blocking probability model for all-optical WDM Networks with heterogeneous link capacities

    NASA Astrophysics Data System (ADS)

    Khan, Akhtar Nawaz

    2017-11-01

    Currently, analytical models are used to compute approximate blocking probabilities in opaque and all-optical WDM networks with homogeneous link capacities. Existing analytical models can also be extended to opaque WDM networking with heterogeneous link capacities due to the wavelength conversion at each switch node. However, existing analytical models cannot be utilized for all-optical WDM networking with a heterogeneous structure of link capacities due to the wavelength continuity constraint and unequal numbers of wavelength channels on different links. In this work, a mathematical model is extended for computing approximate network blocking probabilities in heterogeneous all-optical WDM networks in which the path blocking is dominated by the link along the path with the fewest wavelength channels. A wavelength assignment scheme is also proposed for dynamic traffic, termed last-fit-first wavelength assignment, in which the wavelength channel with the maximum index is assigned first to a lightpath request. Due to the heterogeneous structure of link capacities and the wavelength continuity constraint, the wavelength channels with maximum indexes are utilized for minimum hop routes. Similarly, the wavelength channels with minimum indexes are utilized for multi-hop routes between source and destination pairs. The proposed scheme has lower blocking probability values compared to the existing heuristic for wavelength assignments. Finally, numerical results are computed in different network scenarios which are approximately equal to values obtained from simulations.
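
    A minimal sketch of the proposed last-fit-first rule under the wavelength-continuity constraint: on a heterogeneous route, only wavelengths that exist on the smallest-capacity link are candidates, and the highest free index is tried first. Link names, capacities, and occupancies below are illustrative.

```python
def last_fit_first(path_links, capacity, in_use):
    """Assign the highest-index wavelength free on every link of the path.

    path_links: link ids along the route; capacity[l]: number of wavelength
    channels on link l (heterogeneous); in_use[l]: set of busy wavelengths.
    Returns the chosen wavelength index, or None if the request is blocked.
    """
    max_w = min(capacity[l] for l in path_links)   # continuity constraint
    for w in range(max_w - 1, -1, -1):             # highest index first
        if all(w not in in_use[l] for l in path_links):
            for l in path_links:
                in_use[l].add(w)
            return w
    return None

capacity = {"a-b": 8, "b-c": 4}                    # heterogeneous links
in_use = {"a-b": {7, 6}, "b-c": {3}}
print(last_fit_first(["a-b", "b-c"], capacity, in_use))   # -> 2
```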

  8. A Dynamic Resilience Approach for WDM Optical Networks

    NASA Astrophysics Data System (ADS)

    Garg, Amit Kumar

    2017-12-01

    Optical fibre has been developed as a transmission medium to carry traffic for various services on telecommunications platforms. Failure of this fibre causes loss of data, which can interrupt communication services. This paper focuses on survivable schemes that guarantee both protection and restoration in WDM optical networks. In this paper, a dynamic resilience approach has been proposed whose objective is to route flows in a way that minimizes the total amount of bandwidth used for working and protection paths. In the proposed approach, path-based protection is utilized because it yields lower overhead and is also suitable for global optimization; in case of a single link failure, all flows using the failed link are re-routed to a pre-computed set of paths. The simulation results demonstrate that the proposed approach is much more efficient, as it provides better quality of service (QoS) in terms of network resource utilization, blocking probability, etc., compared to conventional protection and restoration schemes. The proposed approach seems to offer an attractive combination of features, with both ring-like speed and mesh-like efficiency.

  9. Multiple Damage Progression Paths in Model-Based Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Goebel, Kai Frank

    2011-01-01

    Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.

  10. A multipath routing protocol based on clustering and ant colony optimization for wireless sensor networks.

    PubMed

    Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo

    2010-01-01

    For monitoring burst events in reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce energy consumption. An important attribute of WSNs is their limited power supply, and therefore some metrics (such as the energy consumption of communication among nodes, residual energy, and path length) were considered very important criteria when designing routing in the MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to parameters such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on many path metrics, such as energy consumption. The simulation results show that MRP can prolong the network lifetime, balance energy consumption among nodes, and reduce the average energy consumption effectively.
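
    The cluster head's probabilistic route choice can be sketched with the standard ACO selection rule: probability proportional to pheromone^alpha times heuristic desirability^beta, with (1/energy) as the heuristic here. The metrics, alpha, and beta are illustrative, not the MRP's tuned values.

```python
import random

random.seed(0)
# Candidate multipaths from cluster head to sink, with made-up metrics.
paths = {
    "p1": {"pheromone": 0.9, "energy": 4.0},
    "p2": {"pheromone": 0.5, "energy": 2.5},
    "p3": {"pheromone": 0.2, "energy": 1.5},
}
alpha, beta = 1.0, 2.0

# Standard ACO route-selection weights: pheromone^alpha * (1/cost)^beta.
weights = {k: v["pheromone"] ** alpha * (1.0 / v["energy"]) ** beta
           for k, v in paths.items()}
total = sum(weights.values())
probs = {k: w / total for k, w in weights.items()}
choice = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", choice)
```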

  11. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes and we propose two heuristic algorithms for the regenerator placement problem. Performances of these algorithms in terms of required number of regenerators and computational complexity are evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. Traffic engineering methods are compared based on the number of additional demands due to traffic uncertainty that can be carried. Regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  12. Generalized causal mediation and path analysis: Extensions and practical considerations.

    PubMed

    Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra

    2018-01-01

    Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package, gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.

  13. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two separate calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method, along with bisection sampling, is summarized, which provides an accurate and rapidly convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the influence of protein dynamics on the temperature dependence of kinetic isotope effects. © 2016 Elsevier Inc. All rights reserved.

  14. Evaluation of an improved technique for lumen path definition and lumen segmentation of atherosclerotic vessels in CT angiography.

    PubMed

    van Velsen, Evert F S; Niessen, Wiro J; de Weert, Thomas T; de Monyé, Cécile; van der Lugt, Aad; Meijering, Erik; Stokking, Rik

    2007-07-01

    Vessel image analysis is crucial when considering therapeutic options for (cardio-)vascular diseases. Our method, VAMPIRE (Vascular Analysis using Multiscale Paths Inferred from Ridges and Edges), involves two parts: (1) the user defines a start and end point, from which a lumen path is automatically derived and used for initialization, and (2) the vessel lumen is automatically segmented on computed tomographic angiography (CTA) images. Both parts are based on the detection of vessel-like structures by analyzing intensity, edge, and ridge information. A multi-observer evaluation study was performed to compare VAMPIRE with a conventional method on the CTA data of 15 patients with carotid artery stenosis. In addition to the start and end points, the two radiologists required on average 2.5 (SD: 1.9) additional points to define a lumen path when using the conventional method, and 0.1 (SD: 0.3) when using VAMPIRE. The segmentation results were quantitatively evaluated using Similarity Indices, which were slightly lower between VAMPIRE and the two radiologists (0.90 and 0.88, respectively) than between the radiologists themselves (0.92). The evaluation shows that the improved lumen path definition requires minimal user interaction, and that using this path for initialization leads to good automatic lumen segmentation results.

  15. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking

    PubMed Central

    Chargé, Pascal; Bazzi, Oussama; Ding, Yuehua

    2018-01-01

    A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods. PMID:29734797

  16. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking.

    PubMed

    Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua

    2018-05-06

    A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods.
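
    A generic MUSIC-style delay pseudospectrum over frequency-domain channel snapshots illustrates the subspace ingredients involved (sample covariance, noise subspace, delay steering vectors); this is an assumption-laden toy, not the authors' tracking algorithm:

        import numpy as np

        def delay_pseudospectrum(H, freqs, tau_grid, n_paths):
            """H: (n_freq, n_snap) frequency-domain channel snapshots."""
            R = H @ H.conj().T / H.shape[1]              # sample covariance
            _, V = np.linalg.eigh(R)                     # ascending eigenvalues
            En = V[:, : H.shape[0] - n_paths]            # noise subspace
            spec = []
            for tau in tau_grid:
                a = np.exp(-2j * np.pi * freqs * tau)    # delay steering vector
                spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
            return np.array(spec)                        # peaks near mean delays

        rng = np.random.default_rng(1)
        freqs = np.arange(64) * 15e3                     # 64 subcarriers, 15 kHz apart
        true_delays = [1.0e-6, 2.3e-6]
        H = sum(rng.standard_normal((1, 200)) * np.exp(-2j * np.pi * freqs * t)[:, None]
                for t in true_delays)
        H = H + 0.05 * (rng.standard_normal((64, 200)) + 1j * rng.standard_normal((64, 200)))
        tau_grid = np.linspace(0.0, 4e-6, 801)
        spec = delay_pseudospectrum(H, freqs, tau_grid, n_paths=2)
        print(tau_grid[np.argmax(spec)])                 # close to a true delay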

  17. Measurement of Attenuation with Airborne and Ground-Based Radar in Convective Storms Over Land and Its Microphysical Implications

    NASA Technical Reports Server (NTRS)

    Tian, Lin; Heymsfield, G. M.; Srivastava, R. C.; Starr, D. OC. (Technical Monitor)

    2001-01-01

    Observations by the airborne X-band Doppler radar (EDOP) and the NCAR S-band polarimetric (S-POL) radar from two field experiments are used to evaluate the surface reference technique (SRT) for measuring the path-integrated attenuation (PIA) and to study attenuation in deep convective storms. The EDOP, flying at an altitude of 20 km, uses a nadir beam and a forward-pointing beam. It is found that over land, the surface scattering cross-section is highly variable at nadir incidence but relatively stable at forward incidence. It is concluded that measurement by the forward beam provides a viable technique for measuring PIA using the SRT. Vertical profiles of peak attenuation coefficient are derived in two deep convective storms by the dual-wavelength method. Using the measured Doppler velocity, the reflectivities at the two wavelengths, the differential reflectivity, and the estimated attenuation coefficients, it is shown that supercooled drops and dry ice particles probably co-existed above the melting level in regions of updraft, that water-coated partially melted ice particles probably contributed to high attenuation below the melting level, and that the data are not readily explained in terms of a gamma-function raindrop size distribution.

  18. Real-time network security situation visualization and threat assessment based on semi-Markov process

    NASA Astrophysics Data System (ADS)

    Chen, Junhua

    2013-03-01

    To cope with the large amounts of data in current sensed environments, decision aid tools should provide their understanding of situations in a time-efficient manner, so there is an increasing need for real-time network security situation awareness and threat assessment. In this study, a state transition model of network vulnerabilities based on a semi-Markov process is first proposed. Once events are triggered by an attacker's action or a system response, the current states of the vulnerabilities are known, and we calculate the transition probabilities of each vulnerability from its current state to the security failure state. To improve the accuracy of the algorithm, the exploit probabilities are further adjusted according to the attacker's skill level. In light of the preconditions and postconditions of the vulnerabilities in the network, an attack graph is built to visualize the security situation in real time. Subsequently, we predict attack paths, recognize attack intentions, and estimate the impact through analysis of the attack graph. This helps administrators gain insight into intrusion steps, determine the security state, and assess the threat. Finally, testing in a network shows that this method is reasonable and feasible, and that it can take over a substantial analysis burden to facilitate administrators' work.
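
    Because the absorption probabilities of a semi-Markov process depend only on its embedded transition chain (holding times affect timing, not where the chain ends up), the probability of reaching the security failure state can be sketched with the standard fundamental-matrix identity; the four-state vulnerability chain below is hypothetical:

        import numpy as np

        # Hypothetical states: 0 = exposed, 1 = exploited (transient);
        # absorbing: patched, security failure. Q: transient -> transient,
        # R: transient -> absorbing; each row of [Q | R] sums to 1.
        Q = np.array([[0.0, 0.6],
                      [0.2, 0.0]])
        R = np.array([[0.4, 0.0],      # exposed   -> (patched, failure)
                      [0.3, 0.5]])     # exploited -> (patched, failure)
        N = np.linalg.inv(np.eye(2) - Q)          # fundamental matrix
        B = N @ R                                 # B[i, j] = P(absorbed in j | start i)
        print("P(failure | exposed)   =", round(B[0, 1], 4))
        print("P(failure | exploited) =", round(B[1, 1], 4))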

  19. Method and system for modulation of gain suppression in high average power laser systems

    DOEpatents

    Bayramian, Andrew James [Manteca, CA

    2012-07-31

    A high average power laser system with modulated gain suppression includes an input aperture associated with a first laser beam extraction path and an output aperture associated with the first laser beam extraction path. The system also includes a pinhole creation laser having an optical output directed along a pinhole creation path, and an absorbing material positioned along both the first laser beam extraction path and the pinhole creation path. The system further includes a mechanism operable to translate the absorbing material in a direction crossing the first laser beam extraction path, and a controller operable to modulate the second laser beam.

  20. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  1. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  2. Wireless Sensor Network Metrics for Real-Time Systems

    DTIC Science & Technology

    2009-05-20

    to compute the probability of end-to-end packet delivery as a function of latency, the expected radio energy consumption on the nodes from relaying... schedules for WSNs. Particularly, we focus on the impact scheduling has on path diversity, using short repeating schedules and Greedy Maximal Matching...a greedy algorithm for constructing a mesh routing topology. Finally, we study the implications of using distributed scheduling schemes to generate

  3. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    NASA Astrophysics Data System (ADS)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
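
    The attempt-or-fall-back decision described above reduces to comparing expected delays; a minimal sketch, with all parameter values made up:

        def mean_transfer_delay(p_block, d_setup, d_circuit, d_tcp):
            """Expected delay when a circuit attempt (setup cost d_setup) succeeds
            with probability 1 - p_block, else falls back to the TCP/IP path."""
            return (1 - p_block) * (d_setup + d_circuit) + p_block * (d_setup + d_tcp)

        p_block, d_setup, d_circuit, d_tcp = 0.2, 0.05, 0.4, 2.0     # seconds, made up
        attempt = mean_transfer_delay(p_block, d_setup, d_circuit, d_tcp)
        print(attempt, "s with circuit attempt vs", d_tcp, "s TCP-only:",
              "attempt circuit" if attempt < d_tcp else "go straight to TCP")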

  4. The heliolongitudinal distribution of solar flares associated with solar proton events.

    PubMed

    Smart, D F; Shea, M A

    1996-01-01

    We find that the heliolongitudinal distribution of solar flares associated with earth-observed solar proton events is a function of the particle measurement energy. For solar proton events containing fluxes with energies exceeding 1 GeV, we find a Gaussian distribution about the probable root of the Archimedean spiral favorable propagation path leading from the earth to the sun. This distribution is modified as the detection threshold is lowered. For > 100 MeV solar proton events with fluxes ≥ 10 protons (cm² s sr)⁻¹, we find the distribution becomes wider, with a secondary peak near the solar central meridian. When the threshold is lowered to 10 MeV the distribution evolves further. For > 10 MeV solar proton events with a flux threshold of 10 protons (cm² s sr)⁻¹, the distribution can be considered a composite of two Gaussians: one centered about the probable root of the Archimedean spiral favorable propagation path leading from the earth to the sun, and the other centered about the solar central meridian. For large-flux solar proton events, those with a flux threshold of 1000 protons (cm² s sr)⁻¹ at energies > 10 MeV, we find the distribution is rather flat for about 40 degrees on either side of the central meridian.

  5. Event dependence in U.S. executions

    PubMed Central

    Baumgartner, Frank R.; Box-Steffensmeier, Janet M.

    2018-01-01

    Since 1976, the United States has seen over 1,400 judicial executions, and these have been highly concentrated in only a few states and counties. The number of executions across counties appears to fit a stretched distribution. These distributions are typically reflective of self-reinforcing processes, where the probability of observing an event increases with each previous event. To examine these processes, we employ a two-pronged empirical strategy. First, we utilize bootstrapped Kolmogorov-Smirnov tests to determine whether the pattern of executions reflects a stretched distribution, and confirm that it does. Second, we test for event dependence using the Conditional Frailty Model. Our tests estimate the monthly hazard of an execution in a given county, accounting for the number of previous executions, homicides, poverty, and population demographics. Controlling for other factors, we find that the number of prior executions in a county increases the probability of the next execution and accelerates its timing. Once a jurisdiction goes down a given path, the path becomes self-reinforcing, causing the counties to separate out into those never executing (the vast majority of counties) and those which use the punishment frequently. This finding is of great legal and normative concern and, ultimately, may not be consistent with the equal protection clause of the U.S. Constitution. PMID:29293583
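
    A generic parametric-bootstrap Kolmogorov-Smirnov test in this spirit refits the candidate family on each simulated sample so the p-value accounts for estimated parameters; the Weibull family below is an assumed stand-in for a "stretched" distribution, counts are treated as approximately continuous, and this is not the authors' exact procedure:

        import numpy as np
        from scipy import stats

        def bootstrap_ks_pvalue(data, dist=stats.weibull_min, n_boot=500, seed=0):
            rng = np.random.default_rng(seed)
            params = dist.fit(data)                        # fit on the observed data
            d_obs = stats.kstest(data, dist.name, args=params).statistic
            exceed = 0
            for _ in range(n_boot):
                sim = dist.rvs(*params, size=len(data), random_state=rng)
                d_sim = stats.kstest(sim, dist.name, args=dist.fit(sim)).statistic
                exceed += d_sim >= d_obs                   # refit keeps the test honest
            return exceed / n_boot                         # small value: reject family

        toy = stats.weibull_min.rvs(0.6, size=300, random_state=42)
        print(bootstrap_ks_pvalue(toy))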

  6. Peano-like paths for subaperture polishing of optical aspherical surfaces.

    PubMed

    Tam, Hon-Yuen; Cheng, Haobo; Dong, Zhichao

    2013-05-20

    Polishing can be more uniform if the polishing path provides uniform coverage of the surface. It is known that Peano paths can provide uniform coverage of planar surfaces. Peano paths also contain short path segments and turns: (1) all path segments have the same length, (2) path segments are mutually orthogonal at the turns, and (3) path segments and turns are uniformly distributed over the domain surface. These properties make Peano paths an attractive candidate among polishing tool paths because they enhance multidirectional approaches of the tool to each surface location. A method for constructing Peano paths for uniform coverage of aspherical surfaces is proposed in this paper. When mapped to the aspherical surface, the path still contains short path segments and turns, and the above attributes are approximately preserved. Attention is paid to keeping the path segments well distributed near the vertex of the surface. The proposed tool path was used in the polishing of a number of parabolic BK7 specimens using magnetorheological finishing (MRF) and pitch with cerium oxide. The results were rather good for optical lenses and confirm that Peano-like paths are useful for polishing, both for MRF and for pitch polishing. In the latter case, the surface roughness achieved was 0.91 nm according to WYKO measurement.
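
    The planar building block of such paths can be generated with the classic Hilbert-curve index-to-coordinate map (a Peano-type space-filling path); mapping the resulting grid path onto an aspherical surface parameterization is the separate step the paper addresses:

        def hilbert_d2xy(n, d):
            """Map index d along the Hilbert curve to (x, y) on an n x n grid
            (n a power of two); classic bit-twiddling formulation."""
            x = y = 0
            t, s = d, 1
            while s < n:
                rx = 1 & (t // 2)
                ry = 1 & (t ^ rx)
                if ry == 0:                       # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                x, y = x + s * rx, y + s * ry
                t //= 4
                s *= 2
            return x, y

        path = [hilbert_d2xy(8, d) for d in range(64)]        # 8 x 8 grid
        # consecutive points are unit steps: equal-length, orthogonal segments
        assert all(abs(ax - bx) + abs(ay - by) == 1
                   for (ax, ay), (bx, by) in zip(path, path[1:]))
        print(path[:8])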

  7. Method and apparatus for monitoring characteristics of a flow path having solid components flowing therethrough

    DOEpatents

    Hoskinson, Reed L [Rigby, ID; Svoboda, John M [Idaho Falls, ID; Bauer, William F [Idaho Falls, ID; Elias, Gracy [Idaho Falls, ID

    2008-05-06

    A method and apparatus is provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating, or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention, the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward the flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.

  8. The “Path” Not Taken: Exploring Structural Differences in Mapped- Versus Shortest-Network-Path School Travel Routes

    PubMed Central

    Larsen, Kristian; Faulkner, Guy E. J.; Stone, Michelle R.

    2013-01-01

    Objectives. School route measurement often involves estimating the shortest network path. We challenged the relatively uncritical adoption of this method in school travel research and tested the route discordance hypothesis that several types of difference exist between shortest network paths and reported school routes. Methods. We constructed the mapped routes and the shortest network paths for a sample of 759 children aged 9 to 13 years in grades 5 and 6 (boys = 45%, girls = 54%, unreported gender = 1%) in Toronto, Ontario, Canada. We used Wilcoxon signed-rank tests to compare reported with shortest-path route measures, including distance, route directness, intersection crossings, and route overlap. Measurement difference was explored by mode and location. Results. We found statistical evidence of route discordance for walkers and children who were driven, and detected it more often for inner suburban cases. Evidence of route discordance varied by mode and school location. Conclusions. We found statistically significant differences in route structure and built environment variables measured along reported and geographic information systems–based shortest-path school routes. The uncertainty produced by the shortest-path approach challenges its conceptual and empirical validity in school travel research. PMID:23865648
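
    The shortest-network-path baseline being critiqued is straightforward to compute; a sketch on a hypothetical street graph, comparing a reported route's length against the shortest path's:

        import networkx as nx

        # hypothetical street network: nodes are intersections, weights are metres
        G = nx.Graph()
        G.add_weighted_edges_from([("home", "a", 120), ("a", "b", 200),
                                   ("b", "school", 150), ("home", "c", 180),
                                   ("c", "school", 330)])
        shortest = nx.shortest_path(G, "home", "school", weight="weight")
        shortest_len = nx.shortest_path_length(G, "home", "school", weight="weight")
        reported = ["home", "c", "school"]                    # child-reported route
        reported_len = sum(G[u][v]["weight"] for u, v in zip(reported, reported[1:]))
        print(shortest, shortest_len)                         # 470 m via a and b
        print("route discordance:", reported_len - shortest_len, "m")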

  9. Semianalytical computation of path lines for finite-difference models

    USGS Publications Warehouse

    Pollock, D.W.

    1988-01-01

    A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate direction, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell and the time of travel between them can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model. -Author
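
    A one-cell sketch of this scheme under the stated linear-interpolation assumption: each velocity component yields a closed-form exit time, and the earliest face crossing wins. The face velocities and cell geometry below are made up, and the sketch assumes an exit exists:

        import numpy as np

        def face_exit(p, v0, v1, d):
            """Travel time to a face along one axis (np.inf if no exit here)."""
            A = (v1 - v0) / d                    # linear-interpolation slope
            vp = v0 + A * p                      # particle's velocity component
            if vp == 0.0:
                return np.inf
            if A == 0.0:                         # uniform velocity: straight line
                return ((d if vp > 0 else 0.0) - p) / vp
            ve = v1 if vp > 0 else v0            # velocity at the candidate face
            if ve * vp <= 0:                     # component reverses before the face
                return np.inf
            return np.log(ve / vp) / A

        def coord_at(p, v0, v1, d, t):
            """Coordinate after time t under the same linear velocity profile."""
            A = (v1 - v0) / d
            vp = v0 + A * p
            return p + vp * t if A == 0.0 else p + vp * (np.exp(A * t) - 1.0) / A

        def track_cell(x, y, vx0, vx1, vy0, vy1, dx, dy):
            """Exit time and exit point for one cell spanning [0,dx] x [0,dy]."""
            t = min(face_exit(x, vx0, vx1, dx), face_exit(y, vy0, vy1, dy))
            return t, coord_at(x, vx0, vx1, dx, t), coord_at(y, vy0, vy1, dy, t)

        # toy cell: accelerating flow in x, stagnating flow in y
        print(track_cell(0.2, 0.5, 1.0, 2.0, 0.1, -0.1, 1.0, 1.0))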

  10. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Tree (CART), a population decision tree, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570
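
    As a building block only (not the personalized methods themselves), the decision path a single individual follows through a population CART-style tree can be extracted with scikit-learn's decision_path:

        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

        node_ids = tree.decision_path(X[:1]).indices       # nodes visited by patient 0
        for node in node_ids:
            if tree.tree_.children_left[node] == -1:       # -1 marks a leaf
                print(f"leaf {node}: predicted class {tree.predict(X[:1])[0]}")
            else:
                f, thr = tree.tree_.feature[node], tree.tree_.threshold[node]
                print(f"node {node}: split on feature[{f}] <= {thr:.3f}")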

  11. Randomized shortest-path problems: two related models.

    PubMed

    Saerens, Marco; Achbany, Youssef; Fouss, François; Yen, Luh

    2009-08-01

    This letter addresses the problem of designing the transition probabilities of a finite Markov chain (the policy) in order to minimize the expected cost for reaching a destination node from a source node while maintaining a fixed level of entropy spread throughout the network (the exploration). It is motivated by the following scenario. Suppose you have to route agents through a network in some optimal way, for instance, by minimizing the total travel cost; nothing unusual so far, and you could use a standard shortest-path algorithm. Suppose, however, that you want to avoid purely deterministic routing policies in order, for instance, to allow some continual exploration of the network, avoid congestion, or avoid complete predictability of your routing strategy. In other words, you want to introduce some randomness or unpredictability into the routing policy (i.e., the routing policy is randomized). This problem, called the randomized shortest-path problem (RSP), is investigated in this work. The global level of randomness of the routing policy is quantified by the expected Shannon entropy spread throughout the network and is provided a priori by the designer. Necessary conditions for computing the optimal randomized policy, the one minimizing the expected routing cost, are then derived. Iterating these necessary conditions, reminiscent of Bellman's value-iteration equations, allows the computation of an optimal policy, that is, a set of transition probabilities in each node. Interestingly, this first model, while formulated in a totally different framework, is equivalent to Akamatsu's model (1996), which appears in transportation science, for a special choice of the entropy constraint. We therefore revisit Akamatsu's model by recasting it into a sum-over-paths statistical physics formalism, allowing easy derivation of all the quantities of interest in an elegant, unified way. For instance, it is shown that the unique optimal policy can be obtained by solving a simple linear system of equations. This second model is therefore more convincing because of its computational efficiency and soundness. Finally, simulation results obtained on simple, illustrative examples show that the models behave as expected.
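
    A minimal sketch in the spirit of these Bellman-like conditions: soft value iteration with an inverse temperature theta, which interpolates between a random walk (theta near 0) and the deterministic shortest path (large theta). Details are simplified relative to the letter's exact necessary conditions:

        import numpy as np

        def rsp_policy(C, P_ref, dest, theta=2.0, iters=500):
            """C: edge costs, P_ref: reference transition matrix (rows sum to 1)."""
            n = C.shape[0]
            phi = np.zeros(n)                                  # soft cost-to-go
            for _ in range(iters):
                Z = P_ref * np.exp(-theta * (C + phi[None, :]))
                Z[dest, :] = 0.0                               # destination absorbs
                phi = -np.log(np.maximum(Z.sum(axis=1), 1e-300)) / theta
                phi[dest] = 0.0
            W = P_ref * np.exp(-theta * (C + phi[None, :]))
            W[dest, :] = 0.0
            W[dest, dest] = 1.0
            return W / W.sum(axis=1, keepdims=True), phi

        BIG = 1e6                                              # "no edge" cost
        C = np.array([[BIG, 1.0, BIG, 4.0],                    # line 0-1-2-3 plus a
                      [1.0, BIG, 1.0, BIG],                    # costly shortcut 0-3
                      [BIG, 1.0, BIG, 1.0],
                      [4.0, BIG, 1.0, BIG]])
        P_ref = (C < BIG).astype(float)
        P_ref /= P_ref.sum(axis=1, keepdims=True)
        policy, phi = rsp_policy(C, P_ref, dest=3)
        print(np.round(policy, 3))                             # randomized routing policy
        print(np.round(phi, 3))                                # soft distances to node 3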

  12. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-07

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths where the energy barrier is very narrow compared to the whole path. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of kinks arising on the elastic band when the initial band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm with an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state, with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including dislocation nucleation in a penta-twinned nanowire, twin boundary migration under a shear stress, and the cross-slip of a screw dislocation in face-centered cubic metals, are investigated using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.
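
    For orientation, a plain fixed-end NEB sketch on a toy 2-D surface (minima near (±1, 1), saddle near the origin); the simple central-difference tangent used here is exactly the kind of choice that can produce the kinks the paper analyzes, which free-end and adaptive variants are designed to avoid:

        import numpy as np

        def grad(p):                              # toy surface V = (x^2-1)^2 + 2(y-x^2)^2
            x, y = p
            return np.array([4*x*(x**2 - 1) - 8*x*(y - x**2), 4*(y - x**2)])

        def energy(p):
            x, y = p
            return (x**2 - 1)**2 + 2*(y - x**2)**2

        def neb(start, end, n_img=15, k=5.0, step=1e-3, iters=30000):
            band = np.linspace(start, end, n_img)      # straight-line initial band
            for _ in range(iters):
                for i in range(1, n_img - 1):
                    tau = band[i+1] - band[i-1]        # central-difference tangent
                    tau /= np.linalg.norm(tau)
                    g = grad(band[i])
                    g_perp = g - g.dot(tau) * tau      # keep only the normal part
                    f_spr = k * (np.linalg.norm(band[i+1] - band[i]) -
                                 np.linalg.norm(band[i] - band[i-1])) * tau
                    band[i] = band[i] + step * (-g_perp + f_spr)
            return band

        band = neb(np.array([-1.0, 1.0]), np.array([1.0, 1.0]))
        saddle = band[np.argmax([energy(p) for p in band])]
        print("highest image (approx. saddle):", np.round(saddle, 2))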

  13. Lotka-Volterra systems in environments with randomly disordered temporal periodicity

    NASA Astrophysics Data System (ADS)

    Naess, Arvid; Dimentberg, Michael F.; Gaidai, Oleg

    2008-08-01

    A generalized Lotka-Volterra model for a pair of interacting populations of predators and prey is studied. The model accounts for the prey’s interspecies competition and therefore is asymptotically stable, whereas its oscillatory behavior is induced by temporal variations in environmental conditions simulated by those in the prey’s reproduction rate. Two models of the variations are considered, each of them combining randomness with “hidden” periodicity. The stationary joint probability density function (PDF) of the number of predators and prey is calculated numerically by the path integration (PI) method based on the use of characteristic functions and the fast Fourier transform. The numerical results match those for the asymptotic case of white-noise variations for which an analytical solution is available. Several examples are studied, with calculations of important characteristics of oscillations, for example the expected rate of up-crossings given the level of the predator number. The calculated PDFs may be of predominantly random (unimodal) or predominantly periodic nature (bimodal). Thus, the PI method has been demonstrated to be a powerful tool for studies of the dynamics of predator-prey pairs. The method captures the random oscillations as observed in nature, taking into account potential periodicity in the environmental conditions.

  14. Lotka-Volterra systems in environments with randomly disordered temporal periodicity.

    PubMed

    Naess, Arvid; Dimentberg, Michael F; Gaidai, Oleg

    2008-08-01

    A generalized Lotka-Volterra model for a pair of interacting populations of predators and prey is studied. The model accounts for the prey's interspecies competition and therefore is asymptotically stable, whereas its oscillatory behavior is induced by temporal variations in environmental conditions simulated by those in the prey's reproduction rate. Two models of the variations are considered, each of them combining randomness with "hidden" periodicity. The stationary joint probability density function (PDF) of the number of predators and prey is calculated numerically by the path integration (PI) method based on the use of characteristic functions and the fast Fourier transform. The numerical results match those for the asymptotic case of white-noise variations for which an analytical solution is available. Several examples are studied, with calculations of important characteristics of oscillations, for example the expected rate of up-crossings given the level of the predator number. The calculated PDFs may be of predominantly random (unimodal) or predominantly periodic nature (bimodal). Thus, the PI method has been demonstrated to be a powerful tool for studies of the dynamics of predator-prey pairs. The method captures the random oscillations as observed in nature, taking into account potential periodicity in the environmental conditions.

  15. Optical system and method for gas detection and monitoring

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A. (Inventor); Sinko, John Elihu (Inventor); Korman, Valentin (Inventor); Witherow, William K. (Inventor); Hendrickson, Adam Gail (Inventor)

    2011-01-01

    A free-space optical path of an optical interferometer is disposed in an environment of interest. A light beam is guided to the optical interferometer using a single-mode optical fiber. The light beam traverses the interferometer's optical path. The light beam guided to the optical path is combined with the light beam at the end of the optical path to define an output light. A temporal history of the output light is recorded.

  16. Modeling the assembly order of multimeric heteroprotein complexes

    PubMed Central

    Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Shin, Woong-Hee

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be an indispensable approach for studying protein complexes. PMID:29329283

  17. Modeling the assembly order of multimeric heteroprotein complexes.

    PubMed

    Peterson, Lenna X; Togawa, Yoichiro; Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Roy, Amitava; Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be an indispensable approach for studying protein complexes.

  18. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. While potentially more efficient computationally, the system statistics they generate suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths, where one path of each pair is generated at a higher accuracy compared to the other (and so is more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient, as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of tau. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where tau is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
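
    The two simulators the multi-level estimator pairs, exact SSA and cheap-but-biased tau-leaping, contrasted on a toy decay reaction X -> 0 with propensity c*x; illustrative only:

        import numpy as np

        def gillespie(x0, c, t_end, rng):
            x, t = x0, 0.0
            while x > 0:
                t += rng.exponential(1.0 / (c * x))    # time to the next reaction
                if t > t_end:
                    break
                x -= 1
            return x

        def tau_leap(x0, c, t_end, tau, rng):
            x, t = x0, 0.0
            while t < t_end and x > 0:
                x = max(x - rng.poisson(c * x * tau), 0)   # fire a Poisson batch
                t += tau
            return x

        rng = np.random.default_rng(0)
        exact = np.mean([gillespie(100, 0.5, 2.0, rng) for _ in range(2000)])
        cheap = np.mean([tau_leap(100, 0.5, 2.0, 0.1, rng) for _ in range(2000)])
        print(exact, cheap, "analytic mean:", 100 * np.exp(-1.0))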

  19. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two arm robot path planning problem has been analyzed and reduced into simpler components. This thesis examines one component, in which two Puma-560 robot arms simultaneously hold a single object. The problem is to find a path between two points around obstacles that is relatively fast and minimizes the distance. The thesis involves creating a structure on which to build an advanced path planning algorithm that could ideally find the optimum path. An actual path planning method is implemented that is simple yet effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes, which permit rapid collision detection while still providing a representation adequate for path planning.

  20. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant. The alternate load paths must have a required minimum load capability. Robustness analysis of damage tolerant optimum designs indicates that designs are tailored to the specified damage. A design optimized under one damage specification can be sensitive to other damage cases not considered. The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was investigated.
